In my last blog post we used a Jupyter notebook to create a custom report from a Jira server. It turns out connectivity to Atlassian apps doesn’t stop at Jira. Today we are going to use the same script creation and execution process to create repositories in bulk on a Fisheye / Crucible server.

Let’s begin!

What do you need to get going?

The list of prerequisites you need to complete this task is the same as in my last blog post. Let’s quickly reiterate:

  • Docker or Docker CE for Mac, Windows or Linux: https://www.docker.com/get-started
  • A Docker Hub account (I believe this is now required to download Docker for Mac or Windows)

Like before, that’s it!

How to get going with Jupyter Notebooks?

There are TONS of great web pages and YouTube videos on Jupyter Notebook basics. I encourage everyone to Google around!

For our purposes, all you need to do is fire up a Docker image and your web browser.

Enter the following from the command line:

root@gdaymate4:/home/mmarch/jirabook# docker run --rm -p 10000:8888 -e JUPYTER_ENABLE_LAB=yes -v "$PWD":/home/jovyan jupyter/scipy-notebook

… and then connect to the URL that is sent to the command console.

When you log in, you’ll see these options:

Make sure you select the Python 3 Notebook option.

Create a Google Sheet for the input.

To create repositories in bulk, we need an input source. A Google Sheet gives you a data source that can be accessed remotely and updated quickly. You’ll need to create a sheet with this format:
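The script further down reads the columns by name, so — going by that code rather than any official requirement — the first row of the sheet should carry these headers, with one repository per row (the sample values mirror the payloads shown later in this post):

```
display_name,host,path,port,uid,password
foo,perforce.example.com,../foo/,3306,foo,bar
foo2,perforce.example.com,../foo/,3306,foo,bar
```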

You’ll also need to make sure the sheet can be accessed without authentication:

Let’s start up the notebook engine!

First, we need to fetch the CSV export of the Google Sheet. On the Jupyter platform, click in the first cell; you can run shell commands by putting an “!” in front of the command. Replace the Google document identification hash in the wget command below with your own:

!wget --output-document=fecru.csv "https://docs.google.com/spreadsheets/d/1W3eANhfByU8JDdC7o4LgDPwUm4TuOhKFYBn5gKyHwA8/export?gid=0&format=csv&id=1W3eANhfByU8JDdC7o4LgDPwUm4TuOhKFYBn5gKyHwA8"

The output:

--2019-04-08 06:46:29--  https://docs.google.com/spreadsheets/d/1W3eANhfByU8JDdC7o4LgDPwUm4TuOhKFYBn5gKyHwA8/export?gid=0&format=csv&id=1W3eANhfByU8JDdC7o4LgDPwUm4TuOhKFYBn5gKyHwA8
Resolving docs.google.com (docs.google.com)... 172.217.4.174, 2607:f8b0:4007:803::200e
Connecting to docs.google.com (docs.google.com)|172.217.4.174|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified 
Saving to: ‘fecru.csv’
 
fecru.csv               [ <=>                ]     185  --.-KB/s    in 0s     
 
2019-04-08 06:46:30 (10.1 MB/s) - ‘fecru.csv’ saved [185]

Next, import all the modules you want to use:

import requests
import json
from requests.auth import HTTPBasicAuth

Next, we’ll create a template function to make a FeCru repo JSON payload for each repository created.

def make_json(display_name, host, path, port, uid, password):
    # Build the FeCru p4 repository payload; the repo name and
    # display name are both taken from display_name.
    payload = '''
    {
      "type": "p4",
      "name": "%s",
      "displayName": "%s",
      "description": "",
      "storeDiff": true,
      "enabled": true,
      "p4": {
        "host": "%s",
        "path": "%s",
        "port": %s,
        "auth": {
          "username": "%s",
          "password": "%s"
        }
      }
    }
    ''' % (display_name, display_name, host, path, port, uid, password)
    return payload
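If you’d rather not worry about quoting and escaping inside a %-style string template, an equivalent approach (a sketch, not how the post’s script does it) is to build the payload as a Python dict and let json.dumps handle the serialization. The field names below match the payload template above:

```python
import json

def make_json(display_name, host, path, port, uid, password):
    # Same FeCru p4 payload as the template version, but built as a
    # dict so json.dumps handles all quoting/escaping of the values.
    payload = {
        "type": "p4",
        "name": display_name,
        "displayName": display_name,
        "description": "",
        "storeDiff": True,
        "enabled": True,
        "p4": {
            "host": host,
            "path": path,
            # CSV values arrive as strings, so coerce the port to int.
            "port": int(port),
            "auth": {"username": uid, "password": password},
        },
    }
    return json.dumps(payload, indent=2)
```

Either version produces JSON the REST endpoint will accept; the dict version just won’t break if a password happens to contain a double quote.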

Set the FeCru endpoint(s) and CSV file name to be ingested by the script.

base_url = 'http://fecru.teamsinspace.com:8060'
csv_file = 'fecru.csv'
 
print(base_url + "/rest-service-fecru/admin/repositories")

Dump the repo list BEFORE we add more p4 repos.

r = requests.get(base_url + "/rest-service-fecru/admin/repositories", auth=('mdavis', 'Charlie!'))
print(json.dumps(json.loads(r.text), indent=2))

The JSON result:

{
  "start": 0,
  "limit": 100,
  "lastPage": true,
  "size": 2,
  "values": [
    {
      "type": "git",
      "name": "apollo-ui",
      "displayName": "apollo-ui",
      "description": "",
      "storeDiff": true,
      "enabled": false,
      "git": {
        "location": "http://admin@bitbucket.teamsinspace.com:7990/scm/tis/apollo-ui.git",
        "path": "",
        "auth": {
          "authType": "password"
        },
        "renameDetection": "none"
      }
    },
    {
      "type": "git",
      "name": "website",
      "displayName": "website",
      "description": "",
      "storeDiff": true,
      "enabled": false,
      "git": {
        "location": "http://admin@bitbucket.teamsinspace.com:7990/scm/tis/website.git",
        "path": "",
        "auth": {
          "authType": "password"
        },
        "renameDetection": "none"
      }
    }
  ]
}

Create repositories from the CSV file:

import csv

headers = {'Content-type': 'application/json'}
 
with open(csv_file) as csvfile:
    reader = csv.DictReader(csvfile)
    for row in reader:
        json_data = make_json(row['display_name'], row['host'], row['path'],
                              row['port'], row['uid'], row['password'])
        print(json_data)
        r = requests.post(base_url + "/rest-service-fecru/admin/repositories",
                          headers=headers, data=json_data,
                          auth=('mdavis', 'Charlie!'))
        print("Response: ", r.status_code)

… and print the output:

{
      "type": "p4",
      "name": "foo",
      "displayName": "foo",
      "description": "",
      "storeDiff": true,
      "enabled": true,
      "p4": {
        "host": "perforce.example.com",
        "path": "../foo/",
        "port": 3306,
        "auth": {
          "username": "foo",
          "password": "bar"
        }
      }
    }
     
Response:  201
 
    {
      "type": "p4",
      "name": "foo2",
      "displayName": "foo2",
      "description": "",
      "storeDiff": true,
      "enabled": true,
      "p4": {
        "host": "perforce.example.com",
        "path": "../foo/",
        "port": 3306,
        "auth": {
          "username": "foo",
          "password": "bar"
        }
      }
    }
     
Response:  201

Finally, you can see the new repos in Fisheye:
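If you want to confirm the result from the notebook itself rather than the Fisheye UI, a small helper like this (a sketch, assuming the same response shape as the repository dump earlier in the post) turns the repository-list JSON into name/type pairs:

```python
import json

def summarize_repos(payload_text):
    # Parse a FeCru repository-list response and return a list of
    # (name, type) tuples, one per configured repository.
    data = json.loads(payload_text)
    return [(repo["name"], repo["type"]) for repo in data["values"]]
```

Run it on r.text from the earlier requests.get call and check that your new p4 entries show up alongside the existing git repositories.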