Ramblings of a software tester…developer…tester…

Ramblings of a tester who is a programmer who is a tester who is a developer…

Go JIRA API client

Hi folks.

So, I was playing around and created a client for JIRA written in Go. It was nice to do some JSON transformation, and sending POSTs was really trivial.

It’s still in its infancy and I have a couple more features I want to implement, but here is the code…

package main
 
import (
	"bytes"
	"encoding/json"
	"flag"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"os"
	"path/filepath"
 
	"github.com/BurntSushi/toml"
)
 
//Go does not expand "~", so build the config path from the home directory.
var configFile = filepath.Join(os.Getenv("HOME"), ".jira_config.toml")
var parameter string
 
var flags struct {
	Comment     string
	Description string
	IssueKey    string
	Priority    string
	Resolution  string
	Title       string
	Project     string
}
 
//Issue is a representation of a Jira Issue
type Issue struct {
	Fields struct {
		Project struct {
			Key string `json:"key"`
		} `json:"project"`
		Summary     string `json:"summary"`
		Description string `json:"description"`
		Issuetype   struct {
			Name string `json:"name"`
		} `json:"issuetype"`
		Priority struct {
			ID string `json:"id"`
		} `json:"priority"`
	} `json:"fields"`
}
 
//Transition defines a transition JSON object. Used for starting, stopping;
//generally for state transfer
type Transition struct {
	Fields struct {
		Resolution struct {
			Name string `json:"name"`
		} `json:"resolution"`
	} `json:"fields"`
	Transition struct {
		ID string `json:"id"`
	} `json:"transition"`
}
 
//Credentials is a representation of a JIRA config which holds API credentials
type Credentials struct {
	Username string
	Password string
	URL      string
}
 
func init() {
	flag.StringVar(&flags.Comment, "m", "Default Comment", "A Comment when changing the status of an Issue.")
	flag.StringVar(&flags.Description, "d", "Default Description", "Provide a description for a newly created Issue.")
	flag.StringVar(&flags.Priority, "p", "2", "The priority of an Issue which will be set.")
	flag.StringVar(&flags.IssueKey, "k", "", "Issue key of an issue.")
	flag.StringVar(&flags.Resolution, "r", "Done", "Resolution when an issue is closed. Ex.: Done, Fixed, Won't fix.")
	flag.StringVar(&flags.Title, "t", "Default Title", "Title of an Issue.")
	flag.StringVar(&flags.Project, "o", "IT", "Define a Project to create a ticket in.")
	flag.Parse()
}
 
func (cred *Credentials) initConfig() {
	if _, err := os.Stat(configFile); err != nil {
		log.Fatalf("Error using config file: %v", err)
	}
 
	if _, err := toml.DecodeFile(configFile, cred); err != nil {
		log.Fatal("Error during decoding toml config: ", err)
	}
}
 
func main() {
	if len(flag.Args()) < 1 {
		flag.Usage()
		log.Fatal("Please provide an action to take: close, start or create.")
	}
	parameter = flag.Arg(0)
	switch parameter {
	case "close":
		closeIssue(flags.IssueKey)
	case "start":
		startIssue(flags.IssueKey)
	case "create":
		createIssue()
	default:
		log.Fatalf("Unknown action: %s", parameter)
	}
}
 
func closeIssue(issueKey string) {
	if issueKey == "" {
		log.Fatal("Please provide an issueID with -k")
	}
	fmt.Println("Closing issue number: ", issueKey)
 
	var trans Transition
 
	//TODO: Add the ability to define a comment for the close reason
	trans.Fields.Resolution.Name = flags.Resolution
	trans.Transition.ID = "2"
	marshalledTrans, err := json.Marshal(trans)
	if err != nil {
		log.Fatal("Error occurred when marshalling transition: ", err)
	}
	fmt.Println("Marshalled:", trans)
	sendRequest(marshalledTrans, "POST", issueKey+"/transitions?expand=transitions.fields")
}
 
func startIssue(issueID string) {
	if issueID == "" {
		log.Fatal("Please provide an issueID with -i")
	}
 
	fmt.Println("Starting issue number:", issueID)
}
 
func createIssue() {
	fmt.Println("Creating new issue.")
	var issue Issue
	issue.Fields.Description = flags.Description
	issue.Fields.Priority.ID = flags.Priority
	issue.Fields.Summary = flags.Title
	issue.Fields.Project.Key = flags.Project
	issue.Fields.Issuetype.Name = "Task"
	marshalledIssue, err := json.Marshal(issue)
	if err != nil {
		log.Fatal("Error occured when Marshaling Issue:", err)
	}
	sendRequest(marshalledIssue, "POST", "")
}
 
func sendRequest(jsonStr []byte, method string, url string) {
	cred := &Credentials{}
	cred.initConfig()
	fmt.Println("Json:", string(jsonStr))
	req, err := http.NewRequest(method, cred.URL+url, bytes.NewBuffer(jsonStr))
	if err != nil {
		log.Fatal("Error creating request: ", err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.SetBasicAuth(cred.Username, cred.Password)
 
	client := &http.Client{}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
 
	fmt.Println("response Status:", resp.Status)
	fmt.Println("response Headers:", resp.Header)
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println("response Body:", string(body))
 
}
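
A quick usage sketch, assuming the binary is built as gojira (a name I’m making up here); the flag package stops parsing at the first non-flag argument, so the flags go before the action:

# create a task in the IT project (placeholder values)
gojira -o IT -t "Some title" -d "Some description" create
 
# close an issue with a resolution
gojira -k IT-42 -r "Done" close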

It can also be found on my GitHub page: GoJira Github.

Feel free to open issues if you would like to use it and there are features you would find interesting. Currently the username and password for the API are stored in a local config file in your home folder. Later on, I’ll add the ability to use a token rather than a username:password combination.
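
For reference, a minimal ~/.jira_config.toml could look like this; the keys mirror the Credentials struct above, and the URL is the base REST endpoint the requests get appended to (all values are placeholders):

Username = "jirauser"
Password = "supersecret"
URL = "https://jira.example.com/rest/api/2/issue/"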

Thanks for reading!
Gergely.

The One Hundred Day GitHub Challenge

Hello folks.

Today, I present to you the One Hundred Day Github Challenge.

The rules are simple:

  1. A minimum of one commit every day for a hundred days.
  2. Commits have to be meaningful, but can be as little as a fix in a Readme.md.
  3. Doesn’t matter if you are on vacation, there are no exceptions.
  4. There. Are. No. Exceptions.
  5. If you fail a day, you have to start over.
  6. No cheating. You only cheat yourself, so this is really up to you.

Let me be more clear here, because it seems I wasn’t clear enough. What you make out of this challenge is up to you. If you just update a readme.md for a hundred days, that’s fine. Just do it every day. It’s a commitment. At least you’ll have a nice Readme.

Also, let me be clear on another thing. THERE ARE NO EXCEPTIONS. Even on holidays. No. Exceptions.

So there you have it. It’s easy, but then again, it’s not.

Mine starts today! 100…

Thanks for reading.
And happy coding.
Gergely.

Go Progress Quest

Hi Folks.

I started to build a Progress Quest type of web app in Go.

If you’d like to join, or just tag along, please drop by here => Go Progress Quest and feel free to submit an issue if you have an idea, or would like to contribute!

I will try and document the Progress…

Thank you for reading!
Gergely.

Kill a Program on Connecting to a specific WiFi – OSX

Hi folks.

If you have the tendency, like me, to forget that you are on the corporate VPN, or to leave a certain piece of software open when you bring your laptop to work, this might be helpful to you too.

It’s a small script which kills a program when you change your Wifi network.

Script:

#!/bin/bash
 
function log {
    directory="/Users/<username>/wifi_detect"
    log_dir_exists=true
    if [ ! -d $directory ]; then
        echo "Attempting to create => $directory"
        mkdir -p $directory
        if [ ! -d $directory ]; then
            echo "Could not create directory. Continue to log to echo."
            log_dir_exists=false
        fi
    fi
    if $log_dir_exists ; then
        echo "$(date):$1" >> "$directory/log.txt"
    else
        echo "$(date):$1"
    fi
}
 
function check_program {
    # Wrap the first character in brackets so the grep below
    # does not match its own process in the ps output.
    to_kill="[${1::1}]${1:1}"
    log "Checking if $to_kill really quit."
    ps=$(ps aux |grep "$to_kill")
    log "ps => $ps"
    if [ -z "$ps" ]; then
	# 0 - True
        return 0
    else
	# 1 - False
        return 1
    fi
}
 
function kill_program {
    log "Killing program $1"
    pkill -f "$1"
    sleep 1
    if ! check_program "$1" ; then
	log "$1 did not quit!"
    else
	log "$1 quit successfully"
    fi
}
 
wifi_name=$(networksetup -getairportnetwork en0 |awk -F": " '{print $2}')
log "Wifi name: $wifi_name"
if [ "$wifi_name" = "<wifi_name>" ]; then
    log "On corporate network... Killing Program"
    kill_program "<programname>"
elif [ "$wifi_name" = "<home_wifi_name>" ]; then
    # Kill <program> if enabled and if on <home_wifi> and if Tunnelblick is running.
    log "Not on corporate network... Killing <program> if Tunnelblick is active."
    if ! check_program "Tunnelblick" ; then
	log "Tunnelblick is active. Killing <program>"
	kill_program "<program>"
    else
	log "All good... Happy coding."
    fi
else
    log "No known Network..."
fi

Now, the trick on OSX is to only trigger this when your network changes. For that, you can have a ‘launchd’ agent which is configured to watch three files that change when the network does.

The plist sits under your ~/Library/LaunchAgents folder. Create something like com.username.checknetwork.plist.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" \
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>ifup.ddns</string>
 
  <key>LowPriorityIO</key>
  <true/>
 
  <key>ProgramArguments</key>
  <array>
    <string>/Users/username/scripts/ddns-update.sh</string>
  </array>
 
  <key>WatchPaths</key>
  <array>
    <string>/etc/resolv.conf</string>
    <string>/Library/Preferences/SystemConfiguration/NetworkInterfaces.plist</string>
    <string>/Library/Preferences/SystemConfiguration/com.apple.airport.preferences.plist</string>
  </array>
 
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>

Now, when your network changes to whatever your corporate network is, the script kills the configured program (Sublime, in my case).

Hope this helps somebody.
Cheers,
Gergely.

Jenkins Job DSL and Groovy goodness

Hi Folks.

Ever used the Job DSL plugin for Jenkins? What is that, you say? Well, it’s TEH most awesome plug-in for Jenkins to have, because you can CODE your job configuration and put it under source control.

Today, however, I’m not going to write about that because the tutorials on Jenkins JOB DSL are very extensive and very well done. Anyone can pick them up.

Today, I would like to write about a part of it which is even more interesting. And that is extracting recurring parts of your job configurations.

If you have jobs which share a common part that is repeated everywhere, you usually have an urge to extract that into one place, lest it changes and you have to go and apply the change everywhere. That’s not very efficient. But how do you do that in something which looks like a JSON descriptor?

Fret not, it is just Groovy. And being just Groovy, you can use Groovy to implement parts of the job description and then apply that implementation to the job in the DSL.

Suppose you have an email which you send after every job for which the DSL looks like this:

job('MyTestJob') {
    description '<strong>GENERATED - do not modify</strong>'
    label('machine_label')
    logRotator(30, -1, -1, 5)
    parameters {
        stringParam('somestringparam', 'default_value', 'Description')
    }
    wrappers {
        timeout {
            noActivity(600)
            abortBuild()
            failBuild()
            writeDescription('Build failed due to timeout after {0} minutes')
        }
    }
    deliveryPipelineConfiguration("Main", "MyTestJob")
    wrappers {
        preBuildCleanup {
            deleteDirectories()
        }
        timestamps()
    }
    triggers {
        cron('H 12 * * 1,2')
    }
    steps {
        batchFile(readFileFromWorkspace('relative/path/to/file'))
    }
    publishers {
        wsCleanup()
        extendedEmail('email@address.com', '$DEFAULT_SUBJECT', '$DEFAULT_CONTENT') {
            configure { node ->
                node / presendScript << readFileFromWorkspace('email_templates/emailtemplate.groovy')
                node / replyTo << '$DEFAULT_REPLYTO'
                node / contentType << 'default'
            }
            trigger(triggerName: 'StillUnstable', subject: '$DEFAULT_SUBJECT', body: '$DEFAULT_CONTENT', replyTo: '$DEFAULT_REPLYTO', sendToDevelopers: true, sendToRecipientList: true)
            trigger(triggerName: 'Fixed', subject: '$DEFAULT_SUBJECT', body: '$DEFAULT_CONTENT', replyTo: '$DEFAULT_REPLYTO', sendToDevelopers: true, sendToRecipientList: true)
            trigger(triggerName: 'Failure', subject: '$DEFAULT_SUBJECT', body: '$DEFAULT_CONTENT', replyTo: '$DEFAULT_REPLYTO', sendToDevelopers: true, sendToRecipientList: true)
        }
    }
}

Now, that big chunk of email configuration is copied into a bunch of files, which is pretty ugly. And once you try to change it, you’ll have to change it everywhere. Also, the interesting bits here are those readFileFromWorkspace parts. Those allow us to extract even larger chunks of the script into external files. Now, because the slave might be located somewhere else, you should not use new File(‘file’).text in your job DSL. readFileFromWorkspace does essentially that in the background, but resolves the path correctly for wherever it has to look for the specified file.

Let’s put this into a groovy script, shall we? Create a utilities folder where the DSL is and create a groovy file in it like this one:

package utilities
 
public class JobCommonTemplate {
    public static void addEmailTemplate(def job, def dslFactory) {
        String emailScript = dslFactory.readFileFromWorkspace("email_templates/emailtemplate.groovy")
        job.with {
            publishers {
                wsCleanup()
                extendedEmail('email@address.com', '$DEFAULT_SUBJECT', '$DEFAULT_CONTENT') {
                    configure { node ->
                        node / presendScript << emailScript
                        node / replyTo << '$DEFAULT_REPLYTO'
                        node / contentType << 'default'
                    }
                    trigger(triggerName: 'StillUnstable', subject: '$DEFAULT_SUBJECT', body: '$DEFAULT_CONTENT', replyTo: '$DEFAULT_REPLYTO', sendToDevelopers: true, sendToRecipientList: true)
                    trigger(triggerName: 'Fixed', subject: '$DEFAULT_SUBJECT', body: '$DEFAULT_CONTENT', replyTo: '$DEFAULT_REPLYTO', sendToDevelopers: true, sendToRecipientList: true)
                    trigger(triggerName: 'Failure', subject: '$DEFAULT_SUBJECT', body: '$DEFAULT_CONTENT', replyTo: '$DEFAULT_REPLYTO', sendToDevelopers: true, sendToRecipientList: true)
                }
 
            }
        }
    }
}

The function addEmailTemplate takes two parameters: a job, which is an implementation of a Job, and a dslFactory, which is a DslFactory. That factory is the interface which defines our readFileFromWorkspace. Where do we get the implementation from, then? From the DSL script itself. Let’s alter our job to apply this Groovy script.

import utilities.JobCommonTemplate
 
def myJob = job('MyTestJob') {
    description '<strong>GENERATED - do not modify</strong>'
    label('machine_label')
    logRotator(30, -1, -1, 5)
    parameters {
        stringParam('somestringparam', 'default_value', 'Description')
    }
    wrappers {
        timeout {
            noActivity(600)
            abortBuild()
            failBuild()
            writeDescription('Build failed due to timeout after {0} minutes')
        }
    }
    deliveryPipelineConfiguration("Main", "MyTestJob")
    wrappers {
        preBuildCleanup {
            deleteDirectories()
        }
        timestamps()
    }
    triggers {
        cron('H 12 * * 1,2')
    }
    steps {
        batchFile(readFileFromWorkspace('relative/path/to/file'))
    }
}
 
JobCommonTemplate.addEmailTemplate(myJob, this)

Notice three things here.

#1 => import. We import the class from the utilities folder which we created and placed the script into.
#2 => def myJob. We create a variable which will hold our job’s definition.
#3 => this. ‘this’ will be the DslFactory. That’s where we get our readFileFromWorkspace implementation from.

And that’s it. We have extracted a recurring part of our job, and we found an implementation for readFileFromWorkspace. DslFactory has most of the things you need in a job description, should you want to expand on this and extract other bits and pieces.

Have fun, and happy coding!
As always,
Thanks for reading!
Gergely.

Circular buffer in Go

I’m proud of this one too. No peeking. I like how Go lets you do this kind of stuff in a very nice way.

package circular
 
import "fmt"
 
//TestVersion testVersion
const TestVersion = 1
 
//Buffer buffer type
type Buffer struct {
    buffer []byte
    full   int
    size   int
    s, e   int
}
 
//NewBuffer creates a new Buffer
func NewBuffer(size int) *Buffer {
    return &Buffer{buffer: make([]byte, size), s: 0, e: 0, size: size, full: 0}
}
 
//ReadByte reads a byte from b Buffer
func (b *Buffer) ReadByte() (byte, error) {
    if b.full == 0 {
        return 0, fmt.Errorf("Danger Will Robinson: %s", b)
    }
    readByte := b.buffer[b.s]
    b.s = (b.s + 1) % b.size
    b.full--
    return readByte, nil
}
 
//WriteByte writes c byte to the buffer
func (b *Buffer) WriteByte(c byte) error {
    if b.full+1 > b.size {
        return fmt.Errorf("Danger Will Robinson: %s", b)
    }
    b.buffer[b.e] = c
    b.e = (b.e + 1) % b.size
    b.full++
    return nil
}
 
//Overwrite overwrites the oldest byte in a full Buffer; in a
//non-full Buffer it behaves like WriteByte
func (b *Buffer) Overwrite(c byte) {
    if b.full < b.size {
        b.WriteByte(c)
        return
    }
    //When full, s == e, so replace the oldest byte and advance both markers.
    b.buffer[b.s] = c
    b.s = (b.s + 1) % b.size
    b.e = b.s
}
 
//Reset resets the buffer
func (b *Buffer) Reset() {
    *b = *NewBuffer(b.size)
}
 
//String for a string representation of Buffer
func (b *Buffer) String() string {
    return fmt.Sprintf("Buffer: %d, %d, %d, %d", b.buffer, b.s, b.e, b.size)
}
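
A quick usage sketch, assuming the package above is importable as circular (adjust the import path to wherever it actually lives):

package main
 
import (
	"fmt"
 
	"circular" // adjust to wherever the package above actually lives
)
 
func main() {
	b := circular.NewBuffer(2)
	b.WriteByte('a')
	b.WriteByte('b')
	// The buffer is full, so Overwrite replaces 'a', the oldest byte.
	b.Overwrite('c')
	first, _ := b.ReadByte()
	second, _ := b.ReadByte()
	fmt.Printf("%c%c\n", first, second) // prints: bc
}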

DataMunger Kata with Go

Quickly wrote up the Data Munging code kata in Go.

Next time, I want better abstractions, and a way to select columns based on their header data. For now, this is not bad.
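
For the header-based selection, a minimal sketch of the idea could look like this; columnIndex is a hypothetical helper that is not part of the code below, and it assumes the raw header line with whitespace-separated column names:

//columnIndex returns the position of the named column in a
//whitespace-separated header line, or -1 if it is not present.
//Hypothetical helper; assumes "strings" is imported as in the file below.
func columnIndex(header string, column string) int {
	for i, name := range strings.Fields(header) {
		if name == column {
			return i
		}
	}
	return -1
}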

package main
 
import (
	"bufio"
	"fmt"
	"log"
	"math"
	"os"
	"regexp"
	"strconv"
	"strings"
)
 
//Data which is Data
type Data struct {
	columnName string
	compareOne float64
	compareTwo float64
}
 
func main() {
	fmt.Println("Minimum weather data:", GetDataMinimumDiff("weather.dat", 0, 1, 2))
	fmt.Println("Minimum football data:", GetDataMinimumDiff("football.dat", 1, 6, 7))
}
 
//GetDataMinimumDiff gathers data from file to fill up Columns.
func GetDataMinimumDiff(filename string, nameColumn int, compareColOne int, compareColTwo int) Data {
	data := Data{}
	minimum := math.MaxFloat64
	readLines := ReadFile(filename)
	for _, value := range readLines {
		valueArrays := strings.Split(value, ",")
		name := valueArrays[nameColumn]
		trimmedFirst, _ := strconv.ParseFloat(valueArrays[compareColOne], 64)
		trimmedSecond, _ := strconv.ParseFloat(valueArrays[compareColTwo], 64)
		diff := trimmedFirst - trimmedSecond
		diff = math.Abs(diff)
		if diff <= minimum {
			minimum = diff
			data.columnName = name
			data.compareOne = trimmedFirst
			data.compareTwo = trimmedSecond
		}
	}
	return data
}
 
//ReadFile reads lines from a file and gives back a string array which contains the lines.
func ReadFile(fileName string) (fileLines []string) {
	file, err := os.Open(fileName)
	if err != nil {
		log.Fatal(err)
	}
	defer file.Close()
 
	scanner := bufio.NewScanner(file)
	//Skipping the first line which is the header.
	scanner.Scan()
	//Compile the pattern once, outside of the loop.
	re := regexp.MustCompile(`\w+`)
	for scanner.Scan() {
		line := scanner.Text()
		lines := re.FindAllString(line, -1)
		if len(lines) > 0 {
			fileLines = append(fileLines, strings.Join(lines, ","))
		}
	}
 
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
 
	return
}

How to Aggregate Tests in Jenkins with the Aggregate Plugin on Non-Related Jobs

Hello folks.

Today, I would like to talk about something I came in contact with that was hard to find a proper answer / solution for.

So I’m writing this down to document my findings. Like the title says, this is about aggregating test results with Jenkins, using the plug-in provided. If you, like me, have a pipeline structure whose jobs do not work on the same artifact, but do have an upstream-downstream relationship, you will have a hard time configuring and making aggregation work. So here is how I fixed the issue.

Connection

In order for the aggregation to work, there needs to be an artifact connection between the upstream and downstream projects. And that is the key. But if you don’t have that, well, let’s create one. I have a parent job configured like this one =>

<?xml version='1.0' encoding='UTF-8'?>
<project>
  <actions/>
  <description></description>
  <keepDependencies>false</keepDependencies>
  <properties/>
  <scm class="hudson.scm.NullSCM"/>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers/>
  <concurrentBuild>false</concurrentBuild>
  <builders/>
  <publishers>
    <hudson.tasks.test.AggregatedTestResultPublisher plugin="junit@1.9">
      <includeFailedBuilds>false</includeFailedBuilds>
    </hudson.tasks.test.AggregatedTestResultPublisher>
    <hudson.tasks.BuildTrigger>
      <childProjects>ChildJob</childProjects>
      <threshold>
        <name>SUCCESS</name>
        <ordinal>0</ordinal>
        <color>BLUE</color>
        <completeBuild>true</completeBuild>
      </threshold>
    </hudson.tasks.BuildTrigger>
  </publishers>
  <buildWrappers/>
</project>

As you can see, it’s pretty basic. It isn’t much. It’s supposed to be a trigger job for downstream projects. You could have it do anything: maybe it’s scheduled, or it gathers some kind of results, and so on and so forth. The end part of the configuration is the interesting bit.

[Image: parent job configuration]

Aggregation is set up, but it won’t work, because despite there being an upstream/downstream relationship, there also needs to be an artifact connection which uses fingerprinting. Fingerprinting is needed in order for Jenkins to make the physical connection between the jobs via hashes. This is what you will get if that is not set up:

[Image: status before fingerprinting]

But if there is no artifact between them, what do you do? You create one.

The Artifact which Binds Us

Adding a simple timestamp file is enough to make a connection. So let’s do that. This is what it will look like =>

[Image: status after fingerprinting]

The important bits in this picture are the small echo, which simply creates a file containing some timestamp data, and after that the artifact archiving step, which also fingerprints that file, marking it with a hash that identifies this job as using that particular artifact.
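
In a shell build step, the echo could be as simple as this (the timestamp.data filename matches the copy filter in the child configuration below):

echo $(date) > timestamp.data

Then archive timestamp.data with the usual archive-the-artifacts post-build action, making sure fingerprinting is enabled for it.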

Now, the next step is to create the connection. For that, you need the Copy Artifact plugin => Copy Artifact Plugin.

With this, we create the child’s configuration like this:

<?xml version='1.0' encoding='UTF-8'?>
<project>
  <actions/>
  <description></description>
  <keepDependencies>false</keepDependencies>
  <properties/>
  <scm class="hudson.plugins.git.GitSCM" plugin="git@2.4.0">
    <configVersion>2</configVersion>
    <userRemoteConfigs>
      <hudson.plugins.git.UserRemoteConfig>
        <url>https://github.com/Skarlso/DataMung.git</url>
      </hudson.plugins.git.UserRemoteConfig>
    </userRemoteConfigs>
    <branches>
      <hudson.plugins.git.BranchSpec>
        <name>*/master</name>
      </hudson.plugins.git.BranchSpec>
    </branches>
    <doGenerateSubmoduleConfigurations>false</doGenerateSubmoduleConfigurations>
    <submoduleCfg class="list"/>
    <extensions/>
  </scm>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers/>
  <concurrentBuild>false</concurrentBuild>
  <builders>
    <hudson.plugins.gradle.Gradle plugin="gradle@1.24">
      <description></description>
      <switches></switches>
      <tasks>assemble check</tasks>
      <rootBuildScriptDir></rootBuildScriptDir>
      <buildFile>build.gradle</buildFile>
      <gradleName>(Default)</gradleName>
      <useWrapper>true</useWrapper>
      <makeExecutable>false</makeExecutable>
      <fromRootBuildScriptDir>true</fromRootBuildScriptDir>
      <useWorkspaceAsHome>false</useWorkspaceAsHome>
    </hudson.plugins.gradle.Gradle>
    <hudson.plugins.copyartifact.CopyArtifact plugin="copyartifact@1.36">
      <project>ParentJob</project>
      <filter>timestamp.data</filter>
      <target></target>
      <excludes></excludes>
      <selector class="hudson.plugins.copyartifact.TriggeredBuildSelector">
        <upstreamFilterStrategy>UseGlobalSetting</upstreamFilterStrategy>
      </selector>
      <doNotFingerprintArtifacts>false</doNotFingerprintArtifacts>
    </hudson.plugins.copyartifact.CopyArtifact>
  </builders>
  <publishers>
    <hudson.tasks.junit.JUnitResultArchiver plugin="junit@1.9">
      <testResults>build/test-results/*.xml</testResults>
      <keepLongStdio>false</keepLongStdio>
      <healthScaleFactor>1.0</healthScaleFactor>
    </hudson.tasks.junit.JUnitResultArchiver>
  </publishers>
  <buildWrappers>
    <hudson.plugins.ws__cleanup.PreBuildCleanup plugin="ws-cleanup@0.28">
      <deleteDirs>false</deleteDirs>
      <cleanupParameter></cleanupParameter>
      <externalDelete></externalDelete>
    </hudson.plugins.ws__cleanup.PreBuildCleanup>
  </buildWrappers>
</project>

Again, the important bit is this:
[Image: child job with artifact copy]

After the copy is set up, we launch our parent job, and if everything is correct, you should see something like this:

[Image: result after successful artifact copy]

Wrapping it Up

As final words, the important bit to take away from this is that you need an artifact connection between the jobs to make this work. Whatever your downstream / upstream connection is, it doesn’t matter. Also, it can happen that you have everything set up, and there are artifacts which bind the jobs together, but you still can’t see the results. In that case, your best option is to specify the jobs BY NAME in the aggregate test plug-in, like this:

[Image: aggregating by name]

I know this is a pain if there are multiple jobs, but at least Jenkins provides you with autocompletion once you start typing.

Of course, this also works with multiple downstream jobs, if they each copy the artifact to themselves.

Any questions, please feel free to comment and I will answer to the best of my knowledge.

Cheers,
Gergely.

I used to have great ideas on the toilet, but I no longer do.

I used to have great ideas on the toilet, but I no longer do. And I would like to reflect on that. So this is not going to be a technical post, rather some ramblings.

I already had a post similar to this one, but I failed to follow up on it, and now I’m re-visiting the question. With technology on the rise, embedded systems, chips, augmented biology and information being available at our fingertips, I have but one concern. I don’t want to sound like an old guy reflecting on history, saying that now everything is changing and that we need to keep an eye on the past and bla bla bla. I do have one valid concern though. We are in danger of losing ourselves.

With mobile phones and the Internet always being around and available, we are in danger of losing our thoughts and ideas, our individuality and our THINKING. We are reading news, posts, advancements, blogs, vlogs, and the dreaded 9gag. I am one of these people. I read 9gag. And I hate myself for it. It’s immediate satisfaction, a shot of euphoria, and a way of shutting my brain down when it needs it. But I’ve caught myself doing it more than once when I should have been reading something more important, or at least beneficial. Or catching up on a blog post, or reading the news, or Gods forbid just plain sitting around and THINKING for a little while.

So my previous post around this topic was about leaving technology out of your life for a short period of time. This is the same. Have some alone time. Reflect. Write a diary. If you are a technical person, write down ideas you would want to create. If you don’t have any, write out bigger ones. For example: I want to write an RPG. Or: I want to learn how to do metaprogramming the proper way. Or: I want to read up on some Russian science fiction. There are SOOOO many things in the world. Don’t waste it on bullshit and immediate serotonin-generating content, like frigging cats! And while you are at it, stop for a little bit, and think. THINK. What are you doing? Why are you reading up on that crap? What merit does it have?

I understand that from time to time you need to shut off. You need a little bit of comfort. A little bit of serotonin in your system. There are better ways of achieving that. Go for a walk. Run. Bike. Eat chocolate while staring out of a window. Read a comic book. Do random acts of kindness (not kidding). Drink a glass of water. Listen to some awesome music while drawing something (anything, it doesn’t have to be a masterpiece!). Sit back and listen to some music. Talk to a loved one. Talk to a friend. Talk to yourself (again, not kidding). If you have a pet, go play with it.

So I have a little challenge here as well (it wouldn’t be a reflective post without one): do not bring any electronic devices to the toilet. Or if you bring one, the rule is to turn on Airplane mode. I used to have great ideas on the toilet because I didn’t use to watch stuff on my phone. I used to be by myself with my thoughts. I have a family, so there is very little time or space to be alone with my thoughts. And then, when I had the chance, I was browsing on my phone, which again, effectively, led to not being alone with my thoughts.

There you have it. This is my little rant about technology and thinking.

Thanks for reading,
And as always,
Have a nice day.
Gergely.