Migrating my blog from Jekyll Minimal Mistakes to MkDocs Material
After using Jekyll Minimal Mistakes for years, I decided to migrate my blog to MkDocs Material, as it's written in Python and I'm more familiar with it.
Python local version identifiers (PEP 440) distinguish between different builds of the same version of a package. They indicate that a package has been modified in some way from the original source code but should still be considered the same version.
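As a hedged illustration, PyTorch's public wheel index is one well-known project that publishes builds differing only in their local version segment (the part after `+`):

```bash
# A local version identifier is everything after the "+" in a version string:
#   2.1.0+cu118  -> version 2.1.0, built against CUDA 11.8
#   2.1.0+cpu    -> the same version 2.1.0, but a CPU-only build
# Installing one specific build from PyTorch's index (illustrative example):
pip install "torch==2.1.0+cu118" --index-url https://download.pytorch.org/whl/cu118
```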
During CI/CD we often produce a large amount of log output, so it is handy to have some common scripts that format the logs and make the information we need easier to find. Recently, while working with Sonar, I found that they ship some scripts for exactly this kind of output formatting.
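As a minimal sketch of the idea (not the Sonar scripts themselves; `log_group` is a hypothetical helper name), GitHub Actions supports collapsible log sections via workflow commands:

```bash
# Wrap any command in a collapsible section of the Actions log.
# log_group is a made-up helper, not taken from the Sonar scripts.
log_group() {
  echo "::group::$1"    # opens a collapsible section in the Actions log
  shift
  "$@"                  # run the wrapped command
  echo "::endgroup::"   # closes the section
}

log_group "Install dependencies" pip install -r requirements.txt
```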
Although Azure already provides a GitHub Action for Azure Web App to deploy static files, we can also do it ourselves with an Azure CLI command.
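A sketch of such a deployment, where the resource group, app name, and site folder are placeholder values:

```bash
# Package the static site and push it to Azure Web App as a zip deployment.
# my-resource-group, my-web-app, and site/ are placeholders.
cd site && zip -r ../site.zip . && cd ..
az webapp deploy \
  --resource-group my-resource-group \
  --name my-web-app \
  --src-path site.zip \
  --type zip
```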
The Bash shell in GitHub Actions is run by default with the -e and -o pipefail options. The full command used by GitHub Actions is:
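```bash
# From the GitHub Actions documentation: the invocation used when a step
# declares `shell: bash` ({0} is replaced with the generated script path).
bash --noprofile --norc -eo pipefail {0}
```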
During CI/CD processes, and particularly during CI, we frequently hash dependency files to create cache keys (the `key` input in GitHub Actions' `actions/cache` and the `key` parameter in the Azure Pipelines `Cache@2` task). However, the default hash functions come with certain limitations, as noted in this comment. To address this, we can use the following pure Bash command to generate the hash value manually.
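A sketch of such a command, assuming the dependency files match `requirements*.txt` (adjust the pattern to your project):

```bash
# Hash every matching dependency file, then hash the sorted list of digests
# to get one stable cache key. The requirements*.txt pattern is an example.
find . -name 'requirements*.txt' -type f -print0 \
  | sort -z \
  | xargs -0 sha256sum \
  | sha256sum \
  | cut -d' ' -f1
```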
Recently, I began a new project that requires migrating some processes from Azure Pipelines to GitHub Actions. One of the tasks involves retrieving secrets from Azure Key Vault.
In Azure Pipelines, there is an official task called AzureKeyVault@2 designed for this purpose. However, its official counterpart in GitHub Actions, Azure/get-keyvault-secrets@v1, has been deprecated, and the recommended alternative is the Azure CLI. While the Azure CLI is a suitable option, it runs in a Bash shell without multithreading, so fetching a large number of secrets can be time-consuming.
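One way around this, sketched below with placeholder vault and secret names, is to launch the Azure CLI calls as background jobs and wait for them all to finish:

```bash
# Fetch several Key Vault secrets concurrently via background jobs.
# The vault name and secret names are placeholders.
vault="my-key-vault"
secrets=(db-password api-key storage-token)
mkdir -p secrets

for name in "${secrets[@]}"; do
  az keyvault secret show \
    --vault-name "$vault" --name "$name" \
    --query value --output tsv > "secrets/${name}" &
done
wait  # block until every background fetch has completed
```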
Before the Databricks Unity Catalog's release, we used init scripts stored in DBFS to generate the pip.conf
file during cluster startup, giving each cluster its own auth token. But with those init scripts no longer available in Unity Catalog's shared access mode, an alternative approach is required.
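The post's actual solution isn't shown in this excerpt; as one hedged possibility, Databricks clusters can expose secrets through environment variables using the {{secrets/&lt;scope&gt;/&lt;key&gt;}} reference syntax, which pip picks up via PIP_INDEX_URL without any init script:

```bash
# Possible alternative (an assumption, not necessarily the post's solution):
# set this under the cluster's "Environment variables" instead of writing
# pip.conf in an init script. Databricks resolves the secret reference at
# cluster start, and pip reads PIP_INDEX_URL directly. The scope, key, and
# feed URL below are placeholders.
PIP_INDEX_URL=https://user:{{secrets/my-scope/pypi-token}}@my-feed.example.com/simple/
```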