# Onboard a Bedrock project to use Service Introspection

If you have already followed the steps
[here](https://github.com/microsoft/bedrock/tree/master/gitops) to set up the
pipelines for a GitOps workflow in Bedrock, you can update your pipelines to
send data to the Spektate storage, which will let you run the introspection
tool on your services.
## Prerequisites

The service introspection tool needs an Azure storage account to store
information about your pipelines and services.

If you don't already have an Azure storage account you would like to use, run
the `spk deployment onboard` command, which will create a storage account in
your subscription.

You may also create this storage account manually. You will need the following
properties of this storage account before proceeding:

- Name of the storage account
- Access key to this storage account
- Table name (the table that will store Spektate introspection details)

Once you have a storage account with a table, you can start updating the
pipelines to send data to the Spektate storage.
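If you choose the manual route, one way to create the storage account and table is with the Azure CLI. This is a minimal sketch, assuming you are logged in via `az login`; the resource group, location, account name, and table name below are placeholders to replace with your own values:

```shell
# Create the storage account (the name must be globally unique,
# lowercase alphanumeric, 3-24 characters).
az storage account create \
  --name mystorageaccount \
  --resource-group my-resource-group \
  --location westus2 \
  --sku Standard_LRS

# Fetch an access key for the new account (this is the ACCOUNT_KEY
# value you will put in the variable group later).
ACCOUNT_KEY=$(az storage account keys list \
  --account-name mystorageaccount \
  --resource-group my-resource-group \
  --query "[0].value" --output tsv)

# Create the table that will hold the introspection rows.
az storage table create \
  --name mytable \
  --account-name mystorageaccount \
  --account-key "$ACCOUNT_KEY"
```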
## Update the pipelines to send data to storage

1. Create a variable group with the following variables, which will be used by
   the tasks in each of the pipelines to access the storage.

   - `ACCOUNT_KEY`: Set this to the access key for your storage account
   - `ACCOUNT_NAME`: Set this to the name of your storage account
   - `PARTITION_KEY`: A distinguishing key that identifies your source
     repository in the storage; in this example, we're using the name of the
     source repository, `hello-bedrock`
   - `TABLE_NAME`: Set this to the name of the table in your storage account
     that you prefer to use

   ![](./images/variable_group.png)

   Make sure that you update the pipelines in the following steps to include
   this variable group, as shown below:

   ```yaml
   variables:
   - group: <your-variable-group-name>
   ```

2. To your CI pipeline that runs from the source repository to build the docker
   image, copy and paste the following task, which updates the database for
   every build that runs from the source repository so that it shows up in
   Spektate.

   ```yaml
   - bash: |
       curl $SCRIPT > script.sh
       chmod +x ./script.sh
       tag_name="hello-spektate-$(Build.SourceBranchName)-$(Build.BuildId)"
       commitId=$(Build.SourceVersion)
       commitId=$(echo "${commitId:0:7}")
       ./script.sh $(ACCOUNT_NAME) $(ACCOUNT_KEY) $(TABLE_NAME) $(PARTITION_KEY) p1 $(Build.BuildId) imageTag $tag_name commitId $commitId service $(Build.Repository.Name)
     displayName: Update source pipeline details in Spektate db
     env:
       SCRIPT: https://raw.githubusercontent.com/catalystcode/spk/master/scripts/update_introspection.sh
   ```

   Make sure the variable `tag_name` is set to the tag name for the image being
   built in your docker step.

   **Note**: The earlier in the pipeline you add this task, the sooner it will
   send data to Spektate. Adding it before the crucial steps is recommended,
   since it will capture details about failures if those steps fail.

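   The two values the task above computes can be sketched in isolation. This is a standalone illustration with mock pipeline variables (the `BUILD_*` values below are made up; the real ones are injected by Azure DevOps at runtime, and the `hello-spektate-` prefix is project-specific):

   ```shell
   # Mock stand-ins for $(Build.SourceBranchName), $(Build.BuildId),
   # and $(Build.SourceVersion).
   BUILD_SOURCEBRANCHNAME="master"
   BUILD_BUILDID="1234"
   BUILD_SOURCEVERSION="a1b2c3d4e5f6a7b8c9d0"

   # Image tag: must match the tag used by your docker build step.
   tag_name="hello-spektate-${BUILD_SOURCEBRANCHNAME}-${BUILD_BUILDID}"

   # Spektate stores the short (7-character) commit id.
   commitId="${BUILD_SOURCEVERSION:0:7}"

   echo "$tag_name"   # hello-spektate-master-1234
   echo "$commitId"   # a1b2c3d
   ```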
3. To your CD release pipeline (ACR to HLD), add the following lines to the
   end of your last release task (make sure this is not a separate task in the
   process):

   ```bash
   latest_commit=$(git rev-parse --short HEAD)
   echo "latest_commit=$latest_commit"

   # Download the update storage script
   curl https://raw.githubusercontent.com/catalystcode/spk/master/scripts/update_introspection.sh > script.sh
   chmod +x script.sh

   ./script.sh $(ACCOUNT_NAME) $(ACCOUNT_KEY) $(TABLE_NAME) $(PARTITION_KEY) imageTag $(Build.BuildId) p2 $(Release.ReleaseId) hldCommitId $latest_commit env $(Release.EnvironmentName)
   ```

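   Reading the invocations above, the script appears to take the four storage settings first, then a pair identifying the row to update, then column/value pairs to write. This is only our inferred reading of the interface, not documented behavior (verify against the script itself); the `mock_update` function below is hypothetical and just prints what a call would write:

   ```shell
   # Hypothetical mock of the update_introspection.sh argument pattern.
   # The real script writes to Azure table storage; this one only echoes.
   mock_update() {
     local account_name="$1" account_key="$2" table="$3" partition="$4"
     shift 4
     local filter_key="$1" filter_value="$2"
     shift 2
     echo "table=$table partition=$partition filter=$filter_key:$filter_value"
     # Remaining arguments come in column/value pairs.
     while [ "$#" -gt 0 ]; do
       echo "set $1=$2"
       shift 2
     done
   }

   mock_update myaccount mykey mytable hello-bedrock p2 55 hldCommitId a1b2c3d env dev
   # table=mytable partition=hello-bedrock filter=p2:55
   # set hldCommitId=a1b2c3d
   # set env=dev
   ```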
4. To the HLD to manifest pipeline, we will need to add two tasks: one that
   updates the storage with the pipeline Id, and another that updates the
   commit Id of the commit made into the manifest repo. These are kept as two
   separate steps in order to track more information about failures (if they
   were to happen). For the first, add the step below before the fabrikate
   steps:

   ```yaml
   - bash: |
       curl $SCRIPT > script.sh
       chmod +x ./script.sh
       commitId=$(Build.SourceVersion)
       commitId=$(echo "${commitId:0:7}")
       ./script.sh $(ACCOUNT_NAME) $(ACCOUNT_KEY) $(TABLE_NAME) $(PARTITION_KEY) hldCommitId $commitId p3 $(Build.BuildId)
     displayName: Update manifest pipeline details in Spektate db
     env:
       SCRIPT: https://raw.githubusercontent.com/catalystcode/spk/master/scripts/update_introspection.sh
   ```

   For the step that updates the manifest commit Id:

   ```yaml
   - script: |
       cd "$HOME"/hello-bedrock-manifest
       curl $SCRIPT > script.sh
       chmod +x ./script.sh
       latest_commit=$(git rev-parse --short HEAD)
       ./script.sh $(ACCOUNT_NAME) $(ACCOUNT_KEY) $(TABLE_NAME) $(PARTITION_KEY) p3 $(Build.BuildId) manifestCommitId $latest_commit
     displayName: Update commit id in database
     env:
       SCRIPT: https://raw.githubusercontent.com/catalystcode/spk/master/scripts/update_introspection.sh
   ```

5. Kick off a full deployment starting from the source build pipeline, and you
   should see entries appear in the database for each subsequent deployment now
   that the tasks have been added!
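To spot-check that rows are arriving, you can query the table with the Azure CLI. A sketch, assuming the account, key, table, and partition key values below are replaced with the ones from your variable group:

```shell
# List introspection rows for this source repository.
az storage entity query \
  --account-name mystorageaccount \
  --account-key "$ACCOUNT_KEY" \
  --table-name mytable \
  --filter "PartitionKey eq 'hello-bedrock'" \
  --output table
```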