Deploying JFrog Artifactory with Rancher, Part One
JFrog Artifactory is a universal artifact repository that supports all major packaging formats, build tools and continuous integration (CI) servers. It holds all of your binary content in a single location and presents an interface that makes it easy to upload, find, and use binaries throughout the application development and delivery process.
In this article we’ll walk through using Rancher to deploy and manage JFrog Artifactory on a Kubernetes cluster. When you have finished reading this article, you will have a fully functional installation of JFrog Artifactory OSS, and you can use the same steps to install the OSS or commercial version of Artifactory in any other Kubernetes cluster. We’ll also show you how to create a generic repository in Artifactory and upload artifacts into it.
Artifactory has many more features besides the ones presented in this article, and a future article will explore those in greater detail.
Let’s get started!
This article uses the following software:
- Rancher v2.0.8
- Kubernetes cluster running on Google Kubernetes Engine version 1.10.7-gke.2
- Artifactory helm chart version 7.4.2
- Artifactory OSS version 6.3.2
If you’re working through the article at a future date, please use the versions current for that time.
As with all things Kubernetes, there are multiple ways to install Artifactory. We’re going to use the Helm chart. Helm provides a way to package application installation instructions and share them with others. You can think of it as a package manager for Kubernetes. Rancher integrates with Helm via the Rancher Catalog, and through the Catalog you can deploy any Helm-backed application with only a few clicks. Rancher has other features, including:
- an easy and intuitive web interface
- the ability to manage Kubernetes clusters deployed anywhere, on-premise or with any provider
- a single view into all managed clusters
- out of the box monitoring of the clusters
- workload, role-based access control (RBAC), policy and project management
- all the power of Kubernetes without the need to install any software locally
NOTE: If you already have a Rancher v2 server and Kubernetes cluster installed, skip ahead to the section titled Installing JFrog Artifactory.
We’re proud of Rancher’s ability to manage Kubernetes clusters anywhere, so we’re going to launch a Rancher Server in standalone mode on a GCE instance and use it to deploy a Kubernetes cluster in GKE.
Spinning up a Rancher Server in standalone mode is easy – it’s a Docker container. Before we can launch the container, we’ll need a compute instance on which to run it. Let’s launch that with the following command:
gcloud compute --project=rancher-20 instances create rancher-instance \
  --zone=europe-west2-c \
  --machine-type=g1-small \
  --tags=http-server,https-server \
  --image=ubuntu-1804-bionic-v20180911 \
  --image-project=ubuntu-os-cloud
Please change the project and zone parameters as appropriate for your deployment.
After a couple of minutes you should see that your instance is ready to go.
Created [https://www.googleapis.com/compute/v1/projects/rancher-20/zones/europe-west2-c/instances/rancher-instance].
NAME              ZONE            MACHINE_TYPE  INTERNAL_IP  EXTERNAL_IP     STATUS
rancher-instance  europe-west2-c  g1-small      10.154.0.2   220.127.116.11  RUNNING
Make a note of the EXTERNAL_IP address, as you will need it in a moment to connect to the Rancher Server.
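If you misplace the address later, you can always ask GCE for it again. The sketch below just prints the lookup command (using the instance name and zone from the example launch), so it is safe to run even without gcloud installed or authenticated:

```shell
# Recover the external IP of the Rancher instance at any time.
# Instance name and zone match the example launch above -- adjust as needed.
NAME="rancher-instance"
ZONE="europe-west2-c"
LOOKUP="gcloud compute instances describe $NAME --zone=$ZONE \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)'"
echo "$LOOKUP"
```

Pipe the printed command to `sh` (or paste it into your terminal) when you want to run it for real.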
With the compute node up and running, let’s use the GCE CLI to SSH into it.
gcloud compute ssh \
  --project "rancher-20" \
  --zone "europe-west2-c" \
  "rancher-instance"
Again, be sure to adjust the project, zone, and instance name if you launched your instance with different values.
Once connected, run the following commands to install some prerequisites and then install Docker CE. Because the Rancher Server is a Docker container, we need Docker installed in order to continue with the installation.
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get -y install docker-ce
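Before moving on, it's worth a quick sanity check that Docker actually installed. This sketch degrades gracefully on machines without Docker, so it is safe to run anywhere:

```shell
# Sanity-check the Docker CE install before launching the Rancher container.
if command -v docker >/dev/null 2>&1; then
  STATUS="$(docker --version)"   # e.g. "Docker version 18.06.x-ce, ..."
else
  STATUS="docker not found"
fi
echo "$STATUS"
```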
With that out of the way, we’re ready to deploy the Rancher Server. When we launch the container for the first time, the Docker Engine will fetch the container image from Docker Hub and store it locally before launching a container from it. Future launches of the container, should we need to relaunch it, will use the local image store and be much faster.
Use the next command to instruct Docker to launch the Rancher Server container and have it listen on port 80 and 443 on the host.
sudo docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /<host_path>:/var/lib/rancher \
  rancher/rancher:v2.0.8
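The -v flag bind-mounts a host directory over /var/lib/rancher so the server's state survives container restarts. Here is the same command with a concrete example path filled in (/opt/rancher is purely an illustration; any writable host directory works). The sketch prints the command rather than running it, so it needs no Docker daemon:

```shell
# Example of the launch command with a concrete persistence path.
# /opt/rancher is a hypothetical choice -- substitute your own directory.
HOST_PATH="/opt/rancher"
echo "sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v $HOST_PATH:/var/lib/rancher rancher/rancher:v2.0.8"
```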
If nothing goes awry, Docker will print the download status and then the new container ID before returning you to a prompt.
Unable to find image 'rancher/rancher:latest' locally
latest: Pulling from rancher/rancher
124c757242f8: Pull complete
2ebc019eb4e2: Pull complete
dac0825f7ffb: Pull complete
82b0bb65d1bf: Pull complete
ef3b655c7f88: Pull complete
437f23e29d12: Pull complete
52931d58c1ce: Pull complete
b930be4ed025: Pull complete
4a2d2c2e821e: Pull complete
9137650edb29: Pull complete
f1660f8f83bf: Pull complete
a645405725ff: Pull complete
Digest: sha256:6d53d3414abfbae44fe43bad37e9da738f3a02e6c00a0cd0c17f7d9f2aee373a
Status: Downloaded newer image for rancher/rancher:latest
454aa51a6f0ed21cbe47dcbb20a1c6a5684c9ddb2a0682076237aef5e0fdb3a4
Congratulations! You’ve successfully launched a Rancher Server instance.
Retrieve the EXTERNAL_IP address that you saved above and connect to that address in a browser. You'll be asked to accept the self-signed certificate that Rancher installs by default. After this, you'll be presented with the welcome screen. Set a password (and remember it!), and continue to the next page.
On this page you’re asked to set the URL for the Rancher Server. In a production deployment this would be a hostname like rancher.yourcompany.com, but if you’re following along with a demo server, you can use the EXTERNAL_IP address from above.
When you click Save URL on this page, you’ll be taken to the Clusters page, and from there we’ll deploy our Kubernetes cluster.
Using Rancher to Deploy a GKE Cluster
Rancher can deploy and manage Kubernetes clusters anywhere. They can be in Google, Amazon, Azure, on cloud nodes, in datacenters, or even running in a VM on your laptop. It’s one of the most powerful features of the product. For today we’ll be using GKE, so after clicking on Add Cluster, choose Google Container Engine as your provider.
Set the name to something appropriate for this demo, like jfrog-artifactory.
In order to create the cluster, Rancher needs permission to access the Google Cloud Platform. Those permissions are granted via a Service Account private key JSON file. To generate that, first find the service account name (replace the project name with yours if necessary):
gcloud iam service-accounts list --project rancher-20
NAME                                    EMAIL
Compute Engine default service account  <SA>-firstname.lastname@example.org
The output will have a service account number in place of <SA>. Copy this entire address and use it in the following command:
gcloud iam service-accounts keys create ./key.json \
  --iam-account <SA>-email@example.com
This will create a file named key.json in the current directory. This is the Service Account private key that Rancher needs to create the cluster.
You can either paste the contents of that file into the text box, or you can click Read from a file and point it to the key.json file. Rancher will use this info to generate a page wherein you can configure your new cluster:
Set your preferred Node Count and Root Disk Size. The values presented in the above screenshot are sane defaults that you can use for this demo.
When you click Create, the cluster will be provisioned in GKE, and when it’s ready, you’ll see it become active in the UI:
Installing JFrog Artifactory
We’ll install Artifactory by using the Helm chart repository from JFrog. Helm charts, like OS package management systems, give you a stable way to deploy container applications into Kubernetes, upgrade them, or roll them back. The chart guarantees that you’re installing a specific version or tag for the container, and where applications have multiple components, a Helm chart assures that you’re getting the right version for all of them.
Installing the JFrog Helm Repository
Rancher ships with a library of Helm charts in its Application Catalog, but in keeping with the Rancher objective of user flexibility, you can install any third-party Helm repository to have those applications available for deployment in your cluster. We’ll use this today by installing the JFrog repository.
In the Global Cluster view of Rancher click on Catalogs and then click on Add Catalog. In the window that opens, enter a name that makes sense, like jfrog-artifactory and then enter the location of the official JFrog repository.
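For reference, the same repository can be used directly with the Helm v2 CLI (the chart's heritage: Tiller label indicates a Helm v2-era chart). The URL charts.jfrog.io is JFrog's public chart repository, and the release name "artifactory" is just an example; this sketch prints the commands rather than running them, so no Helm installation is required:

```shell
# Helm v2 CLI equivalent of what the Rancher catalog entry does for us.
# charts.jfrog.io is JFrog's public chart repo; the release name is an example.
CHART_REPO="https://charts.jfrog.io"
CHART_VERSION="7.4.2"
echo "helm repo add jfrog $CHART_REPO"
echo "helm repo update"
echo "helm install --name artifactory --version $CHART_VERSION jfrog/artifactory"
```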
Click on Create, and the JFrog repository will appear in the list of custom catalogs.
We’re ready to deploy Artifactory. From the Global view, select the Default project under the jfrog-artifactory cluster:
Once you are inside of the Default project, select Catalog Apps, and then click on Launch. Rancher will show you the apps available for installation from the Application Catalogs. You’ll notice that artifactory-ha shows up twice, once as a partner-provided chart within the default Library of apps that ship with Rancher, and again from the JFrog repository itself. We installed the Helm repository because we want to install the regular, non-HA Artifactory, which is just called artifactory. All catalog apps indicate which library they come from, so in a situation where a chart is present in multiple libraries, you can still choose which to install.
When you select View Details, you have the opportunity to change items about how the application is installed. By default this catalog item will deploy the licensed, commercial version of Artifactory, for which you need a license. If you have a license, then you can leave the default options as they are; however, because we want to install the OSS version, we’re going to change the image that the chart installs.
We do this under the Configuration Options pane, by selecting Add Answer. Set a variable name of artifactory.image.repository and, as its value, the Artifactory OSS image repository.
Now, when you click Launch, Rancher will deploy Artifactory into your cluster.
When the install completes, the red line will change to green. After this happens, if you click on artifactory, it will present you with the resources that Rancher created for you. In this case, it created three workloads, three services, one volume and one secret in Kubernetes.
If you select Workloads, you will see all of them running:
Resolving a Pending Ingress
At the time of this article’s publication, there is a bug that results in the Ingress being stuck in a Pending state. If you see this when you click on Load Balancing, continue reading for the solution.
To resolve the pending Ingress, we need to create the Service to which the Ingress is sending traffic. Click Import YAML in the top right, and in the window that opens, paste the following information and then click Import.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: artifactory
    chart: artifactory-7.4.2
    component: nginx
    heritage: Tiller
    io.cattle.field/appId: artifactory
    release: artifactory
  name: artifactory-artifactory-nginx
  namespace: artifactory
spec:
  externalTrafficPolicy: Local
  ports:
  - name: nginxhttp
    port: 80
    protocol: TCP
    targetPort: 80
  - name: artifactoryhttps
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: artifactory
    component: nginx
    release: artifactory
  sessionAffinity: None
  type: LoadBalancer
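If you prefer the terminal, saving that manifest to a local file and applying it with kubectl accomplishes the same thing as Rancher's Import YAML button (the filename here is arbitrary). This sketch prints the commands so it is safe to run without cluster access:

```shell
# kubectl equivalent of Rancher's "Import YAML" button.
MANIFEST="artifactory-nginx-svc.yaml"   # arbitrary local filename
echo "kubectl apply -f $MANIFEST"
echo "kubectl -n artifactory get svc artifactory-artifactory-nginx"
```

The second command lets you watch the LoadBalancer Service until GKE assigns it an external IP.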
The Workloads pane will now show clickable links for ports 443/tcp and 80/tcp under the artifactory-artifactory-nginx workload. When you select 443/tcp, it will open the Artifactory UI in a new browser tab. Because it’s using a self-signed certificate by default, your browser may give you a warning and ask you to accept the certificate before proceeding.
Taking Artifactory for a Spin
You now have a fully-functional binary artifact repository available for use. That was easy! Before you can start using it, it needs a tiny bit of configuration.
First, set an admin password in the wizard. When it asks you about the proxy server, select Skip unless you’ve deployed this in a place that needs proxy configuration. Create a generic repository, and select Finish.
Now, let’s do a quick walkthrough of some basic usage.
First, we’ll upload the helm chart that you used to create the Artifactory installation.
Select Artifacts from the left-side menu. You will see the generic repository that you created above. Choose it, and then from the upper right corner, select Deploy. Upload the Helm chart zipfile (or any other file) to the repository.
After the deploy finishes, you will see it in the tree under the repository.
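Uploads like this can also be scripted against Artifactory's REST API, which deploys a file to a repository with a plain HTTP PUT. The host, repository name, and credentials below are placeholders; the sketch prints the curl command rather than executing it, so it runs without a live server:

```shell
# Deploy an artifact to a generic repository via Artifactory's REST API.
# All three values are placeholders -- substitute your own.
ARTIFACTORY_URL="https://artifactory.example.com/artifactory"
REPO="generic-local"
FILE="artifactory-7.4.2.tgz"
TARGET="$ARTIFACTORY_URL/$REPO/$FILE"
echo "curl -u admin:<password> -T $FILE $TARGET"
```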
Although this is a simple test of Artifactory, it demonstrates that it can already be used to its full capacity.
You’re all set to use Artifactory for binary artifact storage and distribution and Rancher for easy management of the workloads, the cluster, and everything related to the deployment itself.
If you’ve gone through this article as a demo, you can delete the Kubernetes cluster from the Global Cluster view within Rancher. This will remove it from GKE. After doing so, you can delete the Rancher Server instance directly from GCE.
JFrog Artifactory is extremely powerful. More organizations use it every day, and being able to deploy it quickly and securely into a Kubernetes cluster is useful knowledge.
According to their own literature, Artifactory empowers you to “release fast or die.” Similarly, Rancher allows you to deploy fast while keeping control of the resources and the security around them. You can build, deploy, tear down, secure, monitor, and interact with Kubernetes clusters anywhere in the world, all from a single, convenient, secure interface.
It doesn’t get much easier than that.