Reputation: 153
I wrote the following schema in the Drizzle-recommended syntax in order to initialise my project's database on PlanetScale (MySQL). After completing the migration process and running npx drizzle-kit push:mysql
, I got the following error:
No config path provided, using default 'drizzle.config.ts'
...
Error: foreign key constraints are not allowed, see https://vitess.io/blog/2021-06-15-online-ddl-why-no-fk/
at PromiseConnection.query (/Users/jcbraz/Projects/sound-scout-13/web-app/node_modules/drizzle-kit/index.cjs:34122:26)
at Command.<anonymous> (/Users/jcbraz/Projects/sound-scout-13/web-app/node_modules/drizzle-kit/index.cjs:51859:33)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
code: 'ER_UNKNOWN_ERROR',
errno: 1105,
sql: 'ALTER TABLE `playlists` ADD CONSTRAINT `playlists_user_id_users_id_fk` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE no action ON UPDATE no action;',
sqlState: 'HY000',
sqlMessage: 'foreign key constraints are not allowed, see https://vitess.io/blog/2021-06-15-online-ddl-why-no-fk/'
}
Here's the schema according to the Drizzle ORM documentation:
import { boolean, decimal, int, mysqlTable, text, timestamp, tinyint, uniqueIndex, varchar } from 'drizzle-orm/mysql-core'
import { type InferModel } from 'drizzle-orm';
export const users = mysqlTable('users', {
id: varchar('id', { length: 50 }).primaryKey(),
email: varchar('email', { length: 320 }).notNull(),
first_name: varchar('first_name', { length: 50 }),
last_name: varchar('last_name', { length: 50 }),
credits: int('credits').notNull().default(5),
stripeCustomerId: text('stripeCustomerId')
});
export const playlists = mysqlTable('playlists', {
id: varchar('id', { length: 30 }).primaryKey(),
created_at: timestamp('created_at').notNull().defaultNow(),
user_id: varchar('user_id', { length: 50 }).references(() => users.id),
}, (playlists) => ({
userIndex: uniqueIndex('user_idx').on(playlists.user_id)
}));
export const products = mysqlTable('products', {
id: tinyint('id').autoincrement().primaryKey(),
price: decimal('price', { precision: 3, scale: 2 }).notNull(),
active: boolean('active').default(false),
name: varchar('name', { length: 30 }),
description: varchar('description', { length: 250 })
});
export type User = InferModel<typeof users>;
export type Playlist = InferModel<typeof playlists>;
export type Product = InferModel<typeof products>;
After writing the schema, I ran npx drizzle-kit generate:mysql
which generated the migration and the corresponding .sql file successfully.
-- UPDATE --
Found this really good explanation of PlanetScale's approach to foreign keys: https://github.com/planetscale/discussion/discussions/74
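For anyone hitting the same error, the common workaround (a sketch based on the discussion linked above, not an official recipe) is to remove the .references() call so drizzle-kit never emits the FOREIGN KEY DDL, and keep only the index, enforcing the relation in application code:

```typescript
// Sketch: same playlists table as above, but without .references(), so
// push:mysql no longer generates an ALTER TABLE ... FOREIGN KEY statement
// that Vitess/PlanetScale rejects.
import { mysqlTable, timestamp, uniqueIndex, varchar } from 'drizzle-orm/mysql-core';

export const playlists = mysqlTable('playlists', {
  id: varchar('id', { length: 30 }).primaryKey(),
  created_at: timestamp('created_at').notNull().defaultNow(),
  // no .references(() => users.id) — the user_id -> users.id relation is
  // now a convention enforced by application code, not the database
  user_id: varchar('user_id', { length: 50 }),
}, (playlists) => ({
  userIndex: uniqueIndex('user_idx').on(playlists.user_id),
}));
```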
Upvotes: 4
Views: 6352
Reputation: 360
PlanetScale automatically shards your database, which means it splits your tables across multiple SQL servers. Once that happens, auto-incremented primary keys are no longer globally unique, so you can't use them to look up rows anymore, and you can't use them as foreign keys either. There is a detailed article from PlanetScale here. For this reason, you will need an alternative way to generate the unique IDs you use to look up rows in your SQL tables.
You need to know a little bit about how SQL data is stored on disk: it uses a B-Tree.
You generally look up SQL table rows by index, and because the index is a B-Tree, lookups amount to a binary search, which works best with sortable keys. For this reason, you need to be able to generate unique, sortable IDs for your rows.
While you might be tempted to use UUID, the problem with UUIDs is that the values are not sequential. UUIDv1 also embeds your network card's MAC address, which may or may not be a security hazard, though modern implementations often use a randomly generated node ID instead.
It's going to be better to use a Universally Unique Lexicographically Sortable Identifier (ULID), which you can get with npm install ulid
. A ULID uses a 48-bit millisecond timestamp with 80 bits of random data:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 32_bit_uint_time_high |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 16_bit_uint_time_low | 16_bit_uint_random |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 32_bit_uint_random |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 32_bit_uint_random |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Because the timestamp is in the most significant bits, ULIDs sort by creation time; but the random bits are not ordered, so IDs generated within the same millisecond are not linearly sorted. To guarantee that, you have to use the monotonicFactory:
import { monotonicFactory } from 'ulid'
const ulid = monotonicFactory()
// Assume that these calls occur within the same millisecond
ulid() // 01BX5ZZKBKACTAV9WEVGEMMVRZ
ulid() // 01BX5ZZKBKACTAV9WEVGEMMVS0
Generating cryptographically secure random numbers is EXTREMELY slow, and there are faster hardware-specific solutions. One such option is to read the time off of the network controller, which has a picosecond clock.
I'm currently unaware of how JavaScript generates its millisecond clock, because x86 does not have a built-in millisecond clock (ARM CPUs do). One workaround is to run a spinner timer on a separate thread: increment an integer ticker in a loop, and when the current time rolls over to the next second, reset the ticker and record how many clock ticks elapsed in that second. You can divide a tick number by the total ticks per second to convert it to seconds, but for the purpose of a database index you can skip the conversion and just use the raw ticker value. To read the clock you have to use an interprocess pipe. While this may be complicated, it's probably the option that will work on the largest number of servers.
The problem with the ulid npm package is that it only outputs a string, and we want integers: we would want to store the ULID as two 64-bit integer values.
You could just skip that step and instead use the millisecond timestamp from JavaScript's new Date().getTime()
stored as a 64-bit integer, followed by a 64-bit random number. You can also use the MAC address of the system if it is randomly generated. While not as secure as 80 bits of randomness, it's good enough.
I'm not a fan of having to pay to generate random numbers, so the npm package I created for this problem is called LinearID (LID) (npm install linearid
). My solution uses two 64-bit BigInt words: the MSB word is a millisecond timestamp shifted up 22 bits, ORed with a 22-bit spin ticker in the least significant bits, and the LSB word is a cryptographically secure random number. This gives an upper limit of 4,194,303 calls to LIDNext()
per millisecond; when that limit is reached, the system waits until the next millisecond. The other option was ULID, which generates 80 bits of random data for each uid and could not create 4,194,303 IDs per millisecond anyway, so it works out.
import { datetime, mysqlTable, varbinary } from "drizzle-orm/mysql-core";
export const UserAccounts = mysqlTable('UserAccounts', {
uid: varbinary('uid', { length: 16 }).primaryKey(),
created: datetime('created'),
//...
});
const { LID, LIDPrint, LIDParse, LIDSeconds } = require("linearid");
const [msb, lsb] = LID();
const Example = LIDPrint(msb, lsb);
console.log('\nExample LID hex string: 0x' + Example);
const [msb2, lsb2] = LIDParse(Example);
const TimeS = LIDSeconds(msb);
SQL table rows are unsorted, so you can't just insert the data pre-sorted. To get around this, you're going to want to search for data by both the uid/LID and the timestamp. Because LID stores the millisecond timestamp in the MSB word, it's quick to extract the second timestamp from the LID to pass into the SQL engine. Currently I don't know how to pass a 128-bit varbinary into Drizzle ORM, so please comment below with the correct code. Thanks.
await db.select().from(UserAccounts).where(
and(
eq(UserAccounts.created, TimeS),
eq(UserAccounts.uid, LID())
));
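One way to bridge the varbinary gap (a sketch, assuming the MySQL driver accepts a Node Buffer for a varbinary(16) parameter; the helper names lidToBuffer/bufferToLid are my own) is to pack the two 64-bit BigInt words into a 16-byte Buffer and bind that:

```javascript
// Pack two 64-bit BigInt words into a 16-byte big-endian Buffer, suitable
// for a varbinary(16) column, and unpack them again.
function lidToBuffer(msb, lsb) {
  const buf = Buffer.alloc(16);
  buf.writeBigUInt64BE(msb, 0); // high word first keeps byte order sortable
  buf.writeBigUInt64BE(lsb, 8);
  return buf;
}

function bufferToLid(buf) {
  return [buf.readBigUInt64BE(0), buf.readBigUInt64BE(8)];
}
```

Writing the high word first preserves the time-ordered sort when MySQL compares the binary values byte by byte.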
Upvotes: 4