
Python

Generating a .env file

During local testing, we often need to set environment variables. One way to do this is to create a .env file in the root directory of the project. This file contains key-value pairs of environment variables. For example, a .env file might look like this:

ENV=dev
SECRET=xxx

Below is a quick Bash script that generates a .env file from a list of Azure Key Vault secrets; the same logic can be applied to other secret managers.

#!/bin/bash
set -e

KEY_VAULT_NAME="azure_keyvault_name"

secrets=(
  "secret_name_1"
  "secret_name_2"
  "secret_name_3"
)

az login

# fetch the first secret to check that the user is authenticated before overwriting the .env file
# (with set -e, a failure here aborts the script)
az keyvault secret show --vault-name "$KEY_VAULT_NAME" --name "${secrets[0]}" > /dev/null

> .env  # truncate or create the .env file
if ! grep -q "^\.env$" .gitignore 2>/dev/null; then
  echo ".env" >> .gitignore
  echo ".env added to .gitignore"
fi

for secret in "${secrets[@]}"; do
  value=$(az keyvault secret show --vault-name "$KEY_VAULT_NAME" --name "$secret" | jq -r .value)
  # convert the secret name to env-var style: uppercase, dashes to underscores
  secret_upper=$(echo "$secret" | tr '[:lower:]-' '[:upper:]_')
  echo "${secret_upper}=${value}" >> .env
done
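
Once the file is generated, it can be loaded into the environment during local testing. A minimal sketch, assuming the python-dotenv package is installed and the example .env shown above:

from dotenv import load_dotenv
import os

load_dotenv()  # reads key-value pairs from .env into os.environ
print(os.environ["ENV"])     # dev
print(os.environ["SECRET"])  # xxx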

Profiling Python code

| Name | Scope | Web framework middleware | VSCode extension |
| --- | --- | --- | --- |
| scalene | cpu, gpu, memory, duration | partially | yes |
| cProfile (Python native, function level only, CLI only) | duration | no | no |
| VizTracer | duration | unknown | yes |
| profyle (based on VizTracer) | duration | yes | no |
| pyinstrument | duration | yes | no |
| py-spy | duration | no | no |
| yappi (CLI only) | duration | unknown | no |
| austin | duration | unknown | yes |
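
As a point of reference, here is a minimal sketch of using the built-in cProfile programmatically (function-level timings only; the workload and file name are illustrative):

import cProfile
import pstats

def slow():
    return sum(i * i for i in range(10**6))

cProfile.run("slow()", "profile.out")  # run the statement under the profiler and save stats
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(5)  # top 5 functions by cumulative time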


Running an asyncio task in Databricks

The standard way to run an asyncio task is as simple as asyncio.run(main()). But in Databricks, it is not that simple: the same code raises the following error:

import asyncio
async def main():
    await asyncio.sleep(1)
asyncio.run(main())

RuntimeError: asyncio.run() cannot be called from a running event loop

Indeed, in Databricks we are already inside a running event loop:

import asyncio
asyncio.get_running_loop()

<_UnixSelectorEventLoop running=True closed=False debug=False>
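
One workaround, sketched below under the assumption that the notebook kernel is IPython-based (IPython supports top-level await): since the loop is already running, await the coroutine directly instead of wrapping it in asyncio.run().

import asyncio

async def main():
    await asyncio.sleep(1)

# the notebook's event loop is already running, so await directly
await main()

Alternatively, the third-party nest_asyncio package patches the running loop so that nested asyncio.run() calls work.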

Dockerfile with secrets

The most secure way to use secrets in a Dockerfile is to use the --secret flag of the docker build command. This way, the secret is not stored in the image layers and does not appear in the image history.

A common use case in the Python world is installing packages from a private PyPI repository in a Dockerfile. Suppose that during the CI/CD pipeline there is an environment variable called PIP_INDEX_URL that holds the private PyPI credentials.
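
A minimal sketch of the pattern (base image, secret id, and tag are illustrative; BuildKit is required):

# syntax=docker/dockerfile:1
FROM python:3.11-slim
COPY requirements.txt .
# the secret is mounted only for this RUN step and never baked into a layer
RUN --mount=type=secret,id=pip_index_url \
    PIP_INDEX_URL=$(cat /run/secrets/pip_index_url) \
    pip install -r requirements.txt

Then build with the secret sourced from the pipeline's environment variable:

docker build --secret id=pip_index_url,env=PIP_INDEX_URL -t my-app .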

Check the official Build secrets doc.

First try on Quart, an asyncio re-implementation of Flask

Flask is a little bit old-fashioned today (I know it's still widely used), as it's not async-native, among other things. When I prepared my fastapi-demo this weekend, I discovered a new framework called Quart, maintained by the Pallets project, the same community that maintains Flask. They say: "Quart is an asyncio re-implementation of the popular Flask micro framework API. This means that if you understand Flask you understand Quart." So I decided to give it a try.
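
A minimal sketch of how familiar the API feels (route and app name are illustrative), with async handlers:

from quart import Quart

app = Quart(__name__)

@app.route("/")
async def hello():
    # handlers are coroutines, so awaiting I/O here does not block the server
    return "Hello, Quart!"

if __name__ == "__main__":
    app.run()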

Getting all users from MS Graph API in a few seconds

The MS Graph API endpoint for retrieving users, GET /users, can return all users of the tenant. The default page size is 100 users, and the maximum is 999 users per page. If there are more users, the response contains an @odata.nextLink field, which is a URL to the next page of users. For a big company with a large number of users (50,000, 100,000, or even more), retrieving all of them page by page can be time-consuming.

The MS Graph API provides generous throttling limits, so we can afford to parallelize the queries. This post explores sharding as a strategy to retrieve all users in a matter of seconds. The idea is to divide users based on the first character of the userPrincipalName field: shard 1 covers users whose userPrincipalName starts with a, shard 2 handles users starting with b, and so forth, as sketched below.
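
A minimal sketch of the sharding idea, assuming an access token acquired elsewhere (e.g. via MSAL) and the httpx package for async HTTP; the 26 lowercase letters used as shards are illustrative, and a real tenant may need more prefixes (digits, etc.):

import asyncio
import string

import httpx  # assumption: installed for async HTTP

GRAPH_USERS_URL = "https://graph.microsoft.com/v1.0/users"
ACCESS_TOKEN = "..."  # assumption: acquired elsewhere, e.g. via MSAL

async def fetch_shard(client: httpx.AsyncClient, prefix: str) -> list[dict]:
    # fetch all users whose userPrincipalName starts with `prefix`,
    # following @odata.nextLink until the shard is exhausted
    users: list[dict] = []
    url = f"{GRAPH_USERS_URL}?$filter=startswith(userPrincipalName,'{prefix}')&$top=999"
    while url:
        resp = await client.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
        resp.raise_for_status()
        data = resp.json()
        users.extend(data["value"])
        url = data.get("@odata.nextLink")  # None when there are no more pages
    return users

async def fetch_all_users() -> list[dict]:
    async with httpx.AsyncClient(timeout=30) as client:
        # one concurrent query per shard: 'a', 'b', ..., 'z'
        shards = await asyncio.gather(
            *(fetch_shard(client, letter) for letter in string.ascii_lowercase)
        )
    return [user for shard in shards for user in shard]

users = asyncio.run(fetch_all_users())
print(f"retrieved {len(users)} users")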