Fuzzball Documentation

Adding Dependent Jobs and Volumes

Let’s expand on the previous example by creating a second job that will run after the first. In our example, the second job should take the fortune generated by the fortune command in the first job and reformat it using the cowsay command.

To accomplish this, we need a place for the first job to save the fortune and a way for the second job to read it back in. So in addition to adding a second job, we will create an ephemeral Volume that enables sharing data between jobs within a workflow.

This example requires that you are able to create ephemeral volumes. If you cannot do so, you may need to contact your administrator to gain the appropriate permissions.

You can open the workflow that we created in the last example in the Workflow Editor. If it is not already open, navigate to the Workflows tab (on the left), find the workflow containing the fortune command that you executed in the last example, open it in the workflow dashboard, and select “Open in Workflow Editor” in the upper right.

workflow dashboard with open in workflow editor button highlighted

First, we need to save the output from our fortune command somewhere. You can start by clicking on the job so that it is highlighted green and the job configuration menu opens on the right-hand side. Then edit the job so that standard output is redirected to a file called /tmp/fortune.txt.

The command should be this:

fortune >/tmp/fortune.txt

new command sending fortune standard output to text file

Don’t forget to save your changes!
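
Behind the scenes, the Workflow Editor stores this as a shell invocation in the Fuzzfile that it generates. As you’ll see in the full Fuzzfile at the end of this section, the command for the fortune job is recorded like so:

command:
  - /bin/sh
  - '-c'
  - fortune >/tmp/fortune.txt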

This is a good start, but /tmp is specific to this container and will not persist between jobs. So we need to set up a volume that the jobs (when we have more than one) can share.

You can start by highlighting the vertical “Volumes” tab and clicking “Add a Volume” like so:

volumes tab on job config menu

Now you can create a new ephemeral volume for the jobs in this workflow to use. In our case, we already have a Storage Class called “ephemeral” (set up by our administrator) that we can choose in the drop-down menu. Here you can see that we’ve named our new Volume testVolume and scoped it for use by the User (us).

volume configuration for fortune and cowsay workflow

Save the changes by pressing the button in the lower right of the menu.
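
This Volume definition ends up as a short stanza at the bottom of the generated Fuzzfile. Note that the reference URI reflects the choices we made in this menu, with the User scope and the name of the “ephemeral” Storage Class appearing in its path:

volumes:
  testVolume:
    reference: volume://user/ephemeral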

Now you can head back to the vertical Jobs tab and configure the fortune job to access the newly configured Volume!

Under Environment, click the Add Mounted Volume button.

menu showing location of add mounted volume button

Now select the testVolume we just created in the drop-down menu and tell Fuzzball to mount it at /tmp (the Absolute Path).

job configuration showing how to mount an ephemeral volume

After saving these changes, the job fortune will write its output to an ephemeral volume (that can be used by other jobs) instead of just writing to /tmp within the container.
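
In the generated Fuzzfile, this mount appears under the job’s mounts key, with the Absolute Path recorded as the location:

mounts:
  testVolume:
    location: /tmp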

Great! Now you are ready to add a second, dependent job that will take the data written by the fortune job and process it further. You can start by pressing the small button labeled with a plus sign and dragging and dropping a new job into place. We’ll call it cowsay. Once you’ve named it, you can draw a line from the fortune job to the new cowsay job to indicate that cowsay depends on fortune.

workflow grid now shows a second job dependent on the first
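
Drawing that line adds a requires entry to the cowsay job in the generated Fuzzfile, which tells Fuzzball to hold the cowsay job until fortune has finished:

requires:
  - fortune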

Now configure the cowsay job with the correct command to read the data from /tmp/fortune.txt and pipe the output into the Cowsay program. The following should do the trick:

cat /tmp/fortune.txt | cowsay

job menu with cat file pipe cowsay command

Just as before, you can click on the Environment tab. Set this job to run in the same lolcow container, since it contains both the Fortune and Cowsay programs, and make sure that testVolume is mounted at /tmp.

cowsay job environment configuration

To finish the workflow, you can allocate resources for the cowsay job. A single CPU core and 1GB of memory are plenty.
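
These settings correspond to the resource block that appears under each job in the generated Fuzzfile:

resource:
  cpu:
    cores: 1
  memory:
    size: 1GB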

After saving your changes, you can go back to the vertical Volumes tab. If you hover your pointer over the testVolume Volume that we configured for this job, you will see the jobs that are using this volume highlighted.

jobs using volume highlighted in the workflow grid

Now you can submit the workflow as before by clicking the triangular button in the lower right of the workflow editor grid and optionally naming it. After it completes, you should see something like this:

finished workflow showing logs from cowsay

As in the last section, you can view the Fuzzfile at any time from the Workflow Editor by clicking the ellipsis menu in the lower right of the workflow grid and selecting “Edit YAML”, or by pressing “e” on your keyboard. You can also view it by clicking on the “Definition” tab in the “Workflows” dashboard. Now that we’ve added a second job and an ephemeral volume for the jobs to share, the Fuzzfile generated by the Workflow Editor looks a bit more complicated than it did in the last section.

version: v1
jobs:
  cowsay:
    image:
      uri: oras://godlovedc/lolcow:sif
    mounts:
      testVolume:
        location: /tmp
    command:
      - /bin/sh
      - '-c'
      - cat /tmp/fortune.txt | cowsay
    requires:
      - fortune
    resource:
      cpu:
        cores: 1
      memory:
        size: 1GB
  fortune:
    image:
      uri: oras://godlovedc/lolcow:sif
    mounts:
      testVolume:
        location: /tmp
    command:
      - /bin/sh
      - '-c'
      - fortune >/tmp/fortune.txt
    resource:
      cpu:
        cores: 1
      memory:
        size: 1GB
volumes:
  testVolume:
    reference: volume://user/ephemeral

If you want to replicate this workflow (or any of the workflows in these examples) without manually recreating it in the Workflow Editor, you can always copy and paste this text into a file and open the file in the Workflow Editor. Or you can just press “e” to open the text editor window in the Workflow Editor and paste the text in!

In the next section we will add a few more jobs that can run in parallel and show how data ingress and egress allow you to import data into and export data out of your workflow.