Hello vCommunity,

The other day I was working on a vSphere with Tanzu deployment that required downloading the TKG images from a synced content library. Since my environment did not have access to the internet, I was not able to do it that way. I tried creating a content library and uploading the files manually, but that did not work either, so I decided to create my own content library.

I did not find much information about doing this, other than vSphere 7 and Workload Management on homelab | myupbeat blog, where this approach was also used.

To give you a better understanding of how this is done, I decided to walk you through the process of creating a local, Linux-based content library and subscribing to it from vCenter.


  • Linux-based VM – For this, I am using Ubuntu Server 20.04.2
  • Files for the content library (I’ll use the TKG images I want for my vSphere with Tanzu deployment)
    • https://wp-content.vmware.com/v2/latest/

Let’s download the remote content library files. There are two options for this.

Option #1: Using myupbeat’s script:

#!/bin/bash -x
# Set these two variables before running:
CONTENT_URL="https://wp-content.vmware.com/v2/latest"   # remote content library URL
CONTENT_FOLDER="/data"                                  # local destination folder

mkdir -p ${CONTENT_FOLDER}
wget ${CONTENT_URL}/items.json -O ${CONTENT_FOLDER}/items.json
wget ${CONTENT_URL}/lib.json -O ${CONTENT_FOLDER}/lib.json
FOLDERS=$(cat ${CONTENT_FOLDER}/items.json | jq -r '.items[] | .name')
for f in ${FOLDERS}; do
  if [[ ! -d "${CONTENT_FOLDER}/${f}" ]]; then
    mkdir -p ${CONTENT_FOLDER}/${f}
  fi
  pushd ${CONTENT_FOLDER}/${f}
  if [[ ! -f "item.json" ]]; then
    wget ${CONTENT_URL}/$f/item.json -O item.json
  fi
  FILES=$(cat item.json | jq -r '.files[] | .name')
  for file in ${FILES}; do
    if [[ ! -f "${file}" ]]; then
      wget ${CONTENT_URL}/$f/$file -O $file
    fi
  done
  popd
done

I just added the -p in (mkdir -p ${CONTENT_FOLDER}); without it, the script would not create the folder initially. Also, make sure to run it as root or with sudo privileges so it can write the files.
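To see what the script’s jq parsing does, here is a self-contained sketch that builds a throwaway items.json and walks it the same way the script does (the item names are made up for illustration; jq must be installed):

```shell
#!/bin/bash
# Fake, minimal items.json mimicking the content library metadata layout
WORKDIR=$(mktemp -d)
cat > "${WORKDIR}/items.json" <<'EOF'
{"items":[{"name":"photon-3-k8s-v1.21.6"},{"name":"photon-3-k8s-v1.22.9"}]}
EOF

# Same parsing step the download script uses: one folder per library item
for f in $(jq -r '.items[] | .name' "${WORKDIR}/items.json"); do
  mkdir -p "${WORKDIR}/${f}"
done
ls "${WORKDIR}"
```

In the real run, each of those folders is then filled with the files listed in that item’s item.json.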

Make sure to install any dependencies your system might need, for example the jq command (on Ubuntu: sudo apt install jq):

++ jq -r '.items[] | .name'
./cldownload.sh: line 10: jq: command not found
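To avoid hitting this halfway through a run, a quick pre-flight check helps (wget and jq assumed as the script’s only external dependencies):

```shell
#!/bin/bash
# Verify the download script's dependencies before running it
for dep in wget jq; do
  if command -v "$dep" >/dev/null 2>&1; then
    echo "$dep: ok"
  else
    echo "$dep: missing - install it with: sudo apt install $dep"
  fi
done
```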

The script’s output will look like this; just wait for it to complete.

Option #2: Using wget to grab all the files.

wget -r --no-parent https://wp-content.vmware.com/v2/latest/

Depending on the type of content you are copying and your network speed, it could take several minutes to fully download everything.

Once the files are downloaded, it is time to configure our Nginx service to make them available.

Install Nginx if it is not already there (sudo apt install nginx), then modify one of the following files: /etc/nginx/conf.d/default.conf or /etc/nginx/sites-available/default (on Ubuntu)


server {
        listen 80 default_server;
        listen [::]:80 default_server;

        # SSL configuration
        # listen 443 ssl default_server;
        # listen [::]:443 ssl default_server;
        # Note: You should disable gzip for SSL traffic.
        # See: https://bugs.debian.org/773332
        # Read up on ssl_ciphers to ensure a secure configuration.
        # See: https://bugs.debian.org/765782
        # Self signed certs generated by the ssl-cert package
        # Don't use them in a production server!
        # include snippets/snakeoil.conf;

        root /var/www/html;

        # Add index.php to the list if you are using PHP
        index index.html index.htm index.nginx-debian.html;

        server_name _;

        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
        }
}


server {
  listen 9090 default_server;
  listen [::]:9090 default_server;

  root /data;

  index index.html index.htm index.nginx-debian.html;

  server_name _;

  location / {
    autoindex on;
    try_files $uri $uri/ =404;
  }
}

I used Option #1 to download the files, so my Nginx configuration points to the /data folder; it also listens on port 9090 for the HTTP connection. After editing the file, reload the service with sudo systemctl reload nginx.

Check that the Nginx service is running by browsing to http://<CL-IP-or-FQDN>:9090/; you’ll see an output similar to this:
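If you would rather script that check than use a browser, the same round trip can be simulated locally with Python’s built-in web server standing in for Nginx. This is just a sanity-check sketch with a throwaway folder and port, not part of the setup:

```shell
#!/bin/bash
# Serve a fixture folder over HTTP and fetch lib.json back, the same
# round trip vCenter will perform against the real library
DIR=$(mktemp -d)
echo '{"name":"demo-library"}' > "${DIR}/lib.json"

# python3 ships with Ubuntu 20.04; --directory needs Python 3.7+
python3 -m http.server 9099 --directory "${DIR}" >/dev/null 2>&1 &
SRV=$!
sleep 1

BODY=$(python3 -c "import urllib.request; print(urllib.request.urlopen('http://127.0.0.1:9099/lib.json').read().decode().strip())")
echo "${BODY}"

kill ${SRV}
```

Against the real server you would fetch http://<CL-IP-or-FQDN>:9090/lib.json instead.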

Alright, time to create a content library in vCenter and subscribe it to our local one.

If you want to know more about this process, feel free to visit Managing Content Library Subscriptions – Subscribing to a Content Library
Menu -> Content Libraries -> New Content Library (+)

Select Subscribed content library and use your local Linux-based CL URL followed by the lib.json file location (e.g., http://<CL-IP-or-FQDN>:9090/lib.json)


This content library was hosting about 22 GB of files, so I selected the option to download the content immediately.

Once created, in my case, the library started syncing and downloading the files right away.

Once done, you’ll be able to start using your content library.

Hope you found this post useful,

Do not hesitate to contact me if you have any comments,


Buy me a coffee