The Challenges of Migrating from Firestore to Supabase, or How a Bug Made Me Migrate
I am not a fan of migrations because of everything you need to do in preparation for the process. I like to experiment and break things, but migrating logic or data has always been a task I dread.
For Writings I have been using Firestore ever since I started. The reasons are mainly that it's free and easy to use and integrate. Also, two years ago I thought NoSQL was the future and yada-yada. Well, it turns out document-based databases are the future if you actually need them. And most of the time what you need is a traditional relational database.
Without going further into the pros and cons of Firestore, the first signal that it was not for me was the inability to perform more complex operations, like joins and merges of data from two separate entities/collections, to analyze the data and get more insight into users' behavior. For example, at some point I wanted to check how many users had written more than 5 articles in the past 3 months. Unless you approach it programmatically, that's impossible through the console. And I don't want to spend time coding anything other than features.
This experience didn't render Firestore a bad service. It meant it didn't work for me. I needed a relational database. And I carried that concern until I decided to launch Writings v2.
Ten days before my projected date for the v2 launch, I noticed something peculiar on the Firebase Console dashboard. The read operations for one of the collections went up to 140k (yes, 140,000).
Over the next few days I got charged 10 cents, which served as a reminder of what could happen had that problem scaled. What can I do, I am frugal.
It turned out I had a bug in the logic and I had bombarded my database with read requests. Had I deployed this to production, 10 active users would have been enough to bring me to 1M requests overnight.
Honestly, my project is not that large, so I can put my trust in a cloud solution. Almost all of them are pay-as-you-go, and almost all of them offer the convenience you need so that you stick with them. Fine. But combined with the previous reasons, it was about time to move things to something more trustworthy. And what's more trustworthy in the indie scene than Supabase?
The amazing fact I didn't know about Supabase is that you can scale your plan as you scale your product. You can use the free tier for as long as it makes sense for you. If you do not need the benefits of the paid plan ($25/month at the moment), you can stay on the free plan forever.
The strongest attribute of Supabase is its community. It's crazy how far the word about it has spread. Libraries, subreddits, Twitter, it's everywhere. The documentation and SDKs are a breeze, and integrating it into my project was nothing but 10 minutes of work. The question that opened up the moment I finally decided to migrate was: what should happen with the users' data? Especially since I didn't have much of a clue how much I had scattered it across collections/documents.
A further problem is that I am using Firebase Auth as an authentication service. I like it, it's simple, and I didn't have the luxury of migrating to anything else at the moment. In Firebase Auth, user IDs are auto-generated unique strings that look something like 'aXW0fqoijdkaj2913'. I have used that ID to store the user document in the database and then request it (per ID) when needed.
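Roughly, the coupling looked like this (a sketch using the v8-style Firebase SDK; the collection name is illustrative, not the actual Writings code):

```javascript
// Every user document was keyed by the Firebase Auth UID,
// so the whole data model leaned on Firebase identity.
import firebase from 'firebase/app';
import 'firebase/auth';
import 'firebase/firestore';

async function loadCurrentUser() {
  const { uid } = firebase.auth().currentUser;
  const doc = await firebase.firestore().collection('users').doc(uid).get();
  return doc.data();
}
```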
Another part of the authentication trap was that Row Level Security, with an external authentication service, is not that easy to set up. But we'll get there later.
The User's Identity
The first challenge I had to solve with Supabase was how to uniquely identify users without using their Firebase identity (the ID). It turned out to be easy to solve. Document-based databases give you all the liberty to write whatever you want to your database (which makes them a double-edged sword). Which is fine until you realize that some constraints would be useful. For example, my user document had an email field, but nothing prevented anything (me, or a bug) from storing another document with the same email. I could not enforce the uniqueness of the data.
As Supabase is essentially PostgreSQL behind a nice interface, I could define the email column of the users table as unique. Easy peasy. Then I started querying and identifying users by email (there can no longer be two separate users with the same email in the table).
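In Postgres that is a one-liner. A minimal sketch, assuming a users table with an email column (names are illustrative):

```sql
-- Enforce uniqueness at the database level,
-- something I could never express in Firestore.
alter table users
  add constraint users_email_unique unique (email);
```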
For example, simply being able to set a trigger on a table makes most of my logic so much simpler.
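To give a flavor of what a trigger buys you, here is an illustrative one (not necessarily the exact trigger used in Writings): normalizing emails at the database level means the per-email lookups never have to care about casing or whitespace in application code.

```sql
-- Hypothetical example: lowercase and trim every email before it is written,
-- so 'User@Mail.com ' and 'user@mail.com' can never become two different users.
create or replace function normalize_email()
returns trigger as $$
begin
  new.email := lower(trim(new.email));
  return new;
end;
$$ language plpgsql;

create trigger users_normalize_email
  before insert or update on users
  for each row
  execute function normalize_email();
```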
Why Good Development Practices Matter
The second challenge was finding all the references to Firestore in the code. I had not been following the good and healthy practice of isolating code layers or providing a proper architecture. My database code was everywhere.
This is why interfaces and good coding practices fit well even in indie projects. Wherever possible, always separate the logic between the "layers" the right way. I know this rings a wrong bell amongst indie hackers, but from what I have learned so far, all the good practices that exist in the modern corporate world will eventually have to be implemented in indie projects if they grow. Starting with a good and proper code setup is an advantage in itself.
My approach was to centralize the basic Supabase API calls behind something resembling a database layer, find all references to the ‘firestore’ object in my code, and refactor the logic accordingly.
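In practice that layer can be very small. A sketch of the idea using supabase-js (the function and table names are placeholders, not the actual Writings code):

```javascript
// db.js - the only file in the project that is allowed to talk to Supabase.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_ANON_KEY);

export async function getUserByEmail(email) {
  const { data, error } = await supabase
    .from('users')
    .select('*')
    .eq('email', email)
    .single();
  if (error) throw error;
  return data;
}

export async function saveArticle(article) {
  const { error } = await supabase.from('articles').insert(article);
  if (error) throw error;
}
```

The rest of the app imports from this module, so the next migration (if there ever is one) touches a single file instead of every screen.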
Eventually, after 5 days, I made it. Then the question of migrating the data came back into perspective. To tackle it, I wrote a small JS script that did the copying for me, and I only uncommented the save calls once I was sure the data structure was good. I am still not 100% sure everything was migrated properly, but just from observing the database, I have a good feeling it was.
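For the curious, a rough sketch of what such a one-off copy script can look like, assuming firebase-admin credentials and the Supabase service-role key are available (collection, table, and field names are placeholders):

```javascript
// migrate.js - one-off copy of user documents from Firestore into Supabase.
import admin from 'firebase-admin';
import { createClient } from '@supabase/supabase-js';

admin.initializeApp(); // picks up GOOGLE_APPLICATION_CREDENTIALS
const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_SERVICE_ROLE_KEY);

async function migrateUsers() {
  const snapshot = await admin.firestore().collection('users').get();
  const rows = snapshot.docs.map((doc) => {
    const { email, name, createdAt } = doc.data();
    return { email, name, created_at: createdAt?.toDate?.() ?? null };
  });

  // Dry run first: inspect `rows`, then uncomment the insert once the shape looks right.
  // const { error } = await supabase.from('users').insert(rows);
  // if (error) throw error;

  console.log(`Prepared ${rows.length} users for insert`);
}

migrateUsers();
```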
To migrate the data from Firestore to Supabase you can follow one of two approaches: a completely custom solution, which is what I did since I was not yet aware that option two exists, or, well, option two: the migration scripts that Supabase offers. I strongly encourage you to go with option two.
Postgres/Supabase RLS with a Third-Party Authentication Service
If you decide to use another auth provider, as I do (Firebase Auth), you will need to hack together the integration of your auth session with your database in order to set up proper RLS rules. What is RLS?
Supabase Auth is designed to work perfectly with Postgres Row Level Security (RLS).
You can use RLS to create Policies that are incredibly powerful and flexible, allowing you to write complex SQL rules which fit your unique business needs.
It's similar to what Firestore has under the name Rules: in practice, a set of rules that define who can read from your database and under what circumstances. An additional level of security. Using Supabase Auth makes this very easy to integrate. But with Firebase Auth, you will have to take the JWT from Firebase and use it to set the auth session within Supabase. I didn't make this up; I read a good article about it, written by Grace Wang here. Be sure to find something that works so that you can enable RLS, otherwise there is a big risk that whoever has your API key will be able to do pretty much anything with your database.
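To make it concrete, assume you follow an approach like the one in that article, where a backend mints a Supabase-compatible JWT (signed with your Supabase JWT secret) that carries the user's email as a claim; a policy can then match rows against that claim. A sketch, with the table and column names being placeholders:

```sql
-- Lock the table down, then only let a user read their own row,
-- based on the email claim inside the JWT the client sends.
alter table users enable row level security;

create policy "users can read own record"
  on users
  for select
  using (auth.jwt() ->> 'email' = email);
```

Without a policy like this (and with RLS left disabled), the anon API key alone is enough to read and write every row.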
Conclusion
Instead of a conclusion, I want to share two sentiments that overwhelmed me during this process:
Happy: for I managed to move the data to something over which I have more granular control;
Insecure: as this is still new ground for me and I am not sure how it will scale if my user base grows. But I completely trust the service.
Again, NoSQL is not bad just because it doesn't work for your use case. There are plenty of products that use Firestore or something similar, and it works well for them. For me it didn't, and I had to move on.