Seamless Python Logging with Supabase: Store Your Logs in the Cloud

Harness the Power of Python's Logging Handlers to Persist and Query Logs in Supabase Postgres Database

If you are using Supabase with Python and are looking for a logging solution that works well, why not implement logging with Supabase itself? Having all your logs available in your database is a great low-cost way of making sure your warning and exception logs are queryable at all times. Granted, dedicated logging services would do a much better job than this implementation, but this simple solution is handy for home projects.

Python ships with a great logging module that allows you to extend it with handlers and filters to create a logger matching your requirements.

🚨 If you are not working with Supabase, but are interested in persisting your Python logs in a database, this article will still be of use to you. Simply replace the Supabase-specific insert statements with your own database logic and you are good to go!

Handlers

In many instances, we want logging to go to multiple places at the same time. We may want logs to be printed in the terminal as well as persisted to a file, or even sent to a logging service such as Application Insights or Datadog.

Python has a number of handlers available out of the box. You can create a StreamHandler to log to the terminal and a FileHandler to log to a specific file at the same time:

import logging

logger = logging.getLogger("my-logger")
sh = logging.StreamHandler()
fh = logging.FileHandler("my-application.log")

logger.addHandler(sh)
logger.addHandler(fh)

logger.warning("This is a warning message")

In the above example, the warning message will be printed to the terminal and also persisted in the log file we specified.

Armed with this information, we can now start thinking about creating our own handler to persist these logs to our Supabase instance. If we were to look at the StreamHandler class, we would see the following:

# Lib/logging/__init__.py

class StreamHandler(Handler):
    terminator = '\n'

    def __init__(self, stream=None):
        Handler.__init__(self)
        if stream is None:
            stream = sys.stderr
        self.stream = stream

    def emit(self, record):
        try:
            msg = self.format(record)
            stream = self.stream
            # issue 35046: merged two stream.writes into one.
            stream.write(msg + self.terminator)
            self.flush()
        except RecursionError:  # See issue 36272
            raise
        except Exception:
            self.handleError(record)

The StreamHandler class inherits from the Handler class and has two methods we are interested in: __init__ and emit. The former initializes the handler and the latter is called whenever we log something, such as logger.warning(msg).

When we create our own class, we will also inherit from Handler, but we will pass in our Supabase client instance.

import logging
from supabase import Client

class SupabaseHandler(logging.Handler):
    """A handler class which writes logging records to Supabase."""

    def __init__(self, supabase: Client):
        super().__init__()
        self.supabase = supabase

Here we have created our handler and stored the Supabase client on it. Now we can tackle the emit method. To do this, we can add the following method to the above class:

def emit(self, record: logging.LogRecord) -> None:
    # Build a row whose keys match the columns of the logs table.
    log = dict(
        name=record.name,
        level=record.levelno,
        level_name=record.levelname,
        message=self.format(record),
        created_unix=record.created,
        file_name=record.filename,
        path_name=record.pathname,
    )

    try:
        self.supabase.table("logs").insert(log).execute()
    except Exception:
        # A handler should never raise, so fall back to the standard
        # error handling, just as StreamHandler does above.
        self.handleError(record)

Here we create a dictionary from the log record, which contains a lot of information worth persisting in our database. The keys in this dictionary map directly onto the columns of our table, so make sure the table you create has a column for every key you include here.

After building the dictionary, we perform a simple Supabase insert to write the record into our logs table, wrapped in a try/except so that a failed insert never crashes the application, mirroring the StreamHandler implementation above.

The logs table for this example just needs a column for each key in the dictionary above.
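
A minimal definition along those lines might look something like this; the column types are simply sensible defaults matching the values the handler produces, so adjust the names and constraints to suit your project:

create table if not exists logs (
  id bigint generated by default as identity primary key,
  name text,
  level integer,
  level_name text,
  message text,
  created_unix double precision, -- record.created is a unix timestamp (float)
  file_name text,
  path_name text
);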

Now when we run the following code, the logs will be printed in our terminal as well as persisted in the database.

import logging
from supabase import create_client

supabase = create_client(SUPABASE_URL, SUPABASE_KEY)

logger = logging.getLogger("my-logger")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler())  # keep printing to the terminal
sb_handler = SupabaseHandler(supabase)
logger.addHandler(sb_handler)

logger.debug("This is a debug message")
logger.warning("This is a warning message")

However, there is one issue: the debug message logged here will also be persisted in the database. In many instances, you will want debug messages to be logged to the terminal, but you would not generally want them clogging up your database. In general, I like to persist anything at warning level and above, but make your own decision based on your requirements.

Filters to the rescue

We can create filters and apply them to handlers, allowing us to implement our own logic that decides whether a log record should be handled by a given handler. The logic you put in a filter can be as simple or as complicated as your scenario requires, and filters are very easy to implement.

🚨 Note: I know that handlers have a setLevel method, allowing you to set the logging level for each handler and therefore making this filter redundant, but I have had many issues where this has been ignored for some unknown reason. If setting the log level on the handler works for you, please use that method. Additionally, you may have more complicated logic concerning what to log, in which case filters make sense for you anyway.

Again, we will want to create another class, but this time we will be inheriting from the logging.Filter class:

import logging

class WarningLevelAndAboveFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        return record.levelno >= logging.WARNING

We only need one method in this class: filter. It must return a bool indicating whether the log record should be handled by the handler. In this example, we have stated that a record should only be logged if its level is WARNING or above.

This is good enough for our use case, but we could make this more generic so it can be used by other handlers too:

import logging

class MinLevelAndAboveFilter(logging.Filter):
    def __init__(self, min_level_no: int):
        super().__init__()
        self.min_level_no = min_level_no

    def filter(self, record: logging.LogRecord) -> bool:
        return record.levelno >= self.min_level_no

In this version, the filter accepts a minimum log level in its initializer, so when we instantiate it we can set this to logging.INFO, logging.ERROR, or whatever we want.

Let's put this together:

import logging
from supabase import create_client

supabase = create_client(SUPABASE_URL, SUPABASE_KEY)

logger = logging.getLogger("my-logger")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler())  # everything still goes to the terminal
sb_handler = SupabaseHandler(supabase)
sb_handler.addFilter(MinLevelAndAboveFilter(logging.WARNING))
logger.addHandler(sb_handler)

logger.debug("You can see me in the terminal, but not in Supabase")
logger.warning("You can see me in the terminal and in Supabase")

To tidy things up a little, we could create a function to create the logger for us, allowing us to easily create it anywhere in our codebase:

def make_logger(name: str, supabase: Client) -> logging.Logger:
    sb_handler = SupabaseHandler(supabase)
    sb_handler.addFilter(MinLevelAndAboveFilter(logging.WARNING))

    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(logging.StreamHandler())  # terminal output
    logger.addHandler(sb_handler)
    return logger

Cleanup

OK, so we now have log records in our database, but if our application runs for some time, this table may start to get a bit unwieldy. It would be nice if we could configure the database to automatically clear down this table once the logs pass a certain age. To accomplish this, we are going to create a cron job that runs periodically and deletes any log records that are 30 days old or older.

We first need to enable the pg_cron extension within Supabase. In your Supabase project, click Database in the navigation menu on the left, go to Extensions, search for pg_cron and ensure the toggle is active. You will be asked which schema this is for; leave this at the default (extensions). Either way, pg_cron keeps the data for your jobs in its own cron schema, which we will look at shortly.

Now in a new SQL Editor window, enter the following script:

select
  cron.schedule(
    'clear-down-logs-every-day', -- job name
    '0 0 * * *', -- every day at midnight
    $$
    delete from logs
    where to_timestamp(created_unix) < now() - interval '30 days'
    $$
  )

This select statement creates a cron job on your Postgres database. The first parameter of the cron.schedule function is the name of the job and the second is the schedule the job will follow. The schedule 0 0 * * * will run every day at midnight, and can be read as follows (a couple of further example schedules are shown after the list):

  • The zeroth minute;

  • of the zeroth hour;

  • on any day of the month;

  • in any month of the year;

  • and on any day of the week
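
For reference, here are a couple of other schedules in the same five-field format. The job names and the placeholder select 1 commands below are purely illustrative:

select cron.schedule('hourly-example', '0 * * * *', $$ select 1 $$); -- every hour, on the hour
select cron.schedule('weekly-example', '0 3 * * 0', $$ select 1 $$); -- every Sunday at 03:00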

🔑 Takeaway: https://crontab.guru/ is a great website for generating and checking cron schedules.

The SQL to be executed is the part between the two sets of double dollar signs ($$). In our case, this is a simple delete statement that removes any log records with a created time older than 30 days, but feel free to use your own logic here.

Once you run the script, you can navigate to the table editor and select the cron schema. You will see the job table, which lists the jobs you currently have, and the job_run_details table, which holds information about completed runs.
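
If you would rather stay in the SQL editor, you can query those same tables directly with something like:

select * from cron.job;
select * from cron.job_run_details order by start_time desc limit 20;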

If you have a cron job you would like to remove, simply call the following function, passing the name of the job to be removed:

select cron.unschedule('clear-down-logs-every-day');

Summary

And there we have it: a nice, easy way of persisting logs in your Supabase Postgres database. We have even made sure these logs don't pile up on us by creating a Postgres cron job to delete old log records.

Python's logging handlers and filters are simple, yet powerful enough to let us persist log records anywhere we can reach from Python, so the possibilities are essentially endless.

We could expand on this implementation by creating an interface allowing logs to be actioned and notes to be added, essentially creating a more complete logging system.
