Adding likes to a static blog site using Cloudflare Workers and D1

November 14, 2023

☕️ ☕️ 10 min read

I’ve recently added a nice little “like” button to the blog posts, and it got me thinking - it’s the perfect case study for an isolated full-stack component. So, why not write a new blog post about how I made it happen?

First off, I knew I’d have to set up an API endpoint and handle data persistence in a database. In these kinds of situations, where I need to quickly implement some modest backend functionality in a static site, my go-to technology is Cloudflare.

Cloudflare offers this nifty feature called Workers, allowing you to easily deploy serverless API endpoints across their Edge network spanning 300 global locations.

Plus, they’ve recently rolled out D1, a brand-new SQLite-based database solution. At the time of writing, D1 was still in open alpha, and despite a few limitations, I decided it was worth giving it a try for this blog site.

To fully leverage Cloudflare’s Workers Edge runtime, I integrated the backend framework Hono and used Drizzle as the database ORM.

Finally, I implemented the button component in Preact (a lightweight alternative to React with the same API) and used some client-side optimization strategies like request batching and optimistic updates.

Curious for more? Keep reading; this approach can be extended to accommodate various backend functionalities which you might find useful for your own projects.

Getting ready

To begin, you’ll need to create a Cloudflare account if you haven’t already. This account will grant you access to your dashboard, where you can set up and monitor your Workers and the D1 database.

Next, get the Workers command-line interface, Wrangler. Wrangler facilitates the creation, testing, and deployment of Workers projects. You can install it in your global packages with npm install -g wrangler or use it directly with npx commands.

Running the command wrangler dev will start a local server at http://localhost:8787 for developing our Worker. Be aware that attempting to access this route will result in a 404 error until we build our API.

Additionally, create a wrangler.toml file at the root of your project. This file will serve as the configuration for our Worker. To kick off the configuration, add the following to the file:

name = <WORKER_NAME> # e.g. "my-worker"
main = <WORKER_ENTRY_POINT> # e.g. "src/api/index.ts"
compatibility_date = "2023-08-07"
node_compat = true

Setting up the database

Creating our D1 database with Cloudflare is straightforward. Simply execute the following command:

wrangler d1 create <DATABASE_NAME>

This command will generate a new D1 database and provide the binding configuration, which will be necessary for the next step. You should see something like this on your command prompt as a result of executing the previous command:

[[d1_databases]]
binding = "DB"
database_name = "<DATABASE_NAME>"
database_id = "<unique-ID-for-your-database>"

Connecting your Worker with the D1 database

To establish a connection between your Worker and the D1 database, you’ll need to create a binding. Bindings enable your Workers to access resources, such as D1, on the Cloudflare developer platform.

You can create bindings by updating the wrangler.toml file we created earlier. To bind your D1 database to your Worker, add the following at the end of the file:

[[d1_databases]]
binding = "DB" # i.e. available in your Worker on env.DB
database_name = "<DATABASE_NAME>"
database_id = "<unique-ID-for-your-database>"

The string you set for the binding field (<BINDING_NAME>) will be used to reference this database in your Worker. It will be accessible in your Worker’s environment as env.<BINDING_NAME>, as we’ll explore later on. Feel free to choose any valid JavaScript variable name. In this case, I’ve simply used DB.

Installing additional dependencies

Install Hono, Drizzle ORM, and the Drizzle Kit CLI by running:

npm install hono drizzle-orm
npm install -D drizzle-kit

If you use TypeScript, you’ll also need to install the types for Workers:

npm install -D @cloudflare/workers-types

Optionally, add the installed types to the compilerOptions in your tsconfig.json file. By doing this, you can directly access the Cloudflare Workers-specific types and interfaces within your TypeScript code without requiring explicit import statements:

{
	"compilerOptions": {
		"types": ["@cloudflare/workers-types"]
	}
}

Building the database schema

With our D1 database set up, it’s time to create the schema using Drizzle. One of the standout features of Drizzle is its ability to define and manage the database schema directly in TypeScript.

Let’s start by creating the Posts table, where we’ll store each post’s slug along with the total number of likes:

import { sqliteTable, integer, text, uniqueIndex, primaryKey } from 'drizzle-orm/sqlite-core'

export const posts = sqliteTable(
	'posts',
	{
		id: integer('id').primaryKey({ autoIncrement: true }),
		slug: text('slug').notNull(),
		totalLikes: integer('total_likes').default(0)
	},
	table => ({
		slugIdx: uniqueIndex('slugIdx').on(table.slug)
	})
)

The posts will be queried by their slug in most use cases, thus the index.

Now, let’s proceed to create the table responsible for tracking user likes:

export const userLikes = sqliteTable(
	'user_likes',
	{
		userId: text('user_id'),
		postId: integer('post_id').references(() => posts.id),
		likes: integer('likes').default(0)
	},
	table => {
		return {
			pk: primaryKey(table.userId, table.postId)
		}
	}
)

At the time of writing this article, there was a bug in D1 where left joining two tables that share a column name would not return the second column. That’s why the likes columns are named differently in each table (total_likes and likes).

Whenever you apply changes to the schema, you’ll need to run the following command to generate a SQL migration (for my own convenience, I’ve added this as a script in package.json):

drizzle-kit generate:sqlite

This script will generate a migration file with a name resembling 0000_short_lockheed.
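
For the generate command to find your schema and know where to write the migration files, drizzle-kit reads a small config file at the project root. Here’s a minimal sketch; the paths are assumptions, so adjust them to your own project layout:

import type { Config } from 'drizzle-kit'

export default {
	schema: './src/api/schema.ts', // where the tables above are defined
	out: './drizzle' // where generated migration files are written
} satisfies Config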

To initialize the local database and run the server locally, use the following commands, replacing FILE_NAME with the name of the generated migration file:

wrangler d1 execute <DATABASE_NAME> --local --file=./drizzle/<FILE_NAME>.sql
wrangler dev --local --persist

Note: Alternatively, if you are just prototyping, Drizzle lets you use the drizzle-kit push command, which applies the schema changes to your development database without generating a migration file.

Crafting Web APIs with Hono

Hono is a backend framework, much like Express but built for the edge. Hono is fully compatible with the Cloudflare Workers runtime, and with just a few lines of code we can have a fully functional Worker set up, as demonstrated below:

import { Hono } from 'hono'

const app = new Hono()

app.get('/hello-world', c => c.text('hello world!'))

export default app

This will create a GET endpoint accessible locally at http://localhost:8787/hello-world.

This will serve as the foundation for the subsequent steps, where we will expand upon this to build the remaining backend functionalities.

Storing user likes

Our primary objective is to define an endpoint for storing both post and user likes. This endpoint, which accepts POST requests, will be triggered by the “like” button.

I’ve designated the route for this endpoint as /api/posts/:slug/like, making the post slug easily accessible via the request parameters within the request context: c.req.param('slug').

To persist the data in the database, we’ll utilize the database binding previously defined, which is also readily accessible within the request context.

Additionally, these bindings can accommodate environment variables, which can be set in the development environment by creating a .dev.vars file, and in the deployed Worker by executing wrangler secret put <KEY>. In my case, I included a salt value named IP_ADDRESS_SALT, which will be used later for hashing IP addresses.
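
For local development, the .dev.vars file could look something like this (the value shown is just a placeholder):

IP_ADDRESS_SALT = "some-long-random-string"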

Now, we can initialize Hono with our customized bindings, ensuring proper typing:

type Bindings = {
	DB: D1Database
	IP_ADDRESS_SALT: string
}

const app = new Hono<{ Bindings: Bindings }>()

Then, we can consume the request context and initiate a new database connection as shown below:

import { drizzle } from 'drizzle-orm/d1'

app.post('/api/posts/:slug/like', async c => {
	const db = drizzle(c.env.DB)
	// rest will be added later ...
})

Identifying users on each request

A crucial aspect of registering post likes is establishing a reliable user identification method. Since I wanted to allow users to like posts without mandatory authentication, I identify users based on the origin IP address of each request. To ensure user privacy and avoid retaining sensitive data, I also opted to hash this information.

Accessing the request IP is straightforward by utilizing the Cloudflare Worker-provided header cf-connecting-ip within the request context. I then used the SHA-256 hashing algorithm along with a salt (stored as the environment variable IP_ADDRESS_SALT mentioned earlier) to generate the user ID, as demonstrated below:

const ipAddress = c.req.header('cf-connecting-ip') || '0.0.0.0'
const salt = c.env.IP_ADDRESS_SALT
const currentUserId = await hashIpAddress(ipAddress, salt)

For brevity, I have omitted the detailed definition of the hashIpAddress function here, but you can find it in the source code for reference.
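
If you’re curious, a minimal version could look roughly like this, using the Web Crypto API available in the Workers runtime (treat it as a sketch rather than the exact implementation):

async function hashIpAddress(ipAddress: string, salt: string): Promise<string> {
	// Hash the IP address together with the salt using SHA-256
	const data = new TextEncoder().encode(ipAddress + salt)
	const digest = await crypto.subtle.digest('SHA-256', data)
	// Encode the resulting bytes as a hex string
	return [...new Uint8Array(digest)]
		.map(byte => byte.toString(16).padStart(2, '0'))
		.join('')
}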

Persisting to the database

Now, we need to save the relevant information in the database. My approach for this was to first fetch the statistics about the corresponding post and user, performing a left join on both tables to validate, amongst other things, that the user hasn’t reached the maximum number of likes:

const post = await db
	.select({
		postId: posts.id,
		userLikes: userLikes.likes
	})
	.from(userLikes)
	.leftJoin(posts, eq(posts.id, userLikes.postId))
	.where(and(eq(posts.slug, slug), eq(userLikes.userId, currentUserId)))
	.get()
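
With those statistics at hand, we can derive the increment to apply. The exact validation is omitted here, but a sketch could look like the following (the limit of 10 likes per user is an assumption, and count is the value sent by the like button):

// Parse the count sent by the client and clamp it so a user
// can never exceed the assumed maximum of 10 likes per post
const MAX_LIKES_PER_USER = 10
const { count } = await c.req.json<{ count: number }>()
const likesIncrement = Math.max(0, Math.min(count, MAX_LIKES_PER_USER - (post?.userLikes ?? 0)))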

Right after, we update both the Posts and UserLikes tables. At the time of writing, there wasn’t a way to perform transactions on D1 using Drizzle, so I opted for two sequential upsert queries:

const updatedPost = await db
	.insert(posts)
	.values({ slug, totalLikes: likesIncrement })
	.onConflictDoUpdate({
		target: posts.slug,
		set: { totalLikes: sql`total_likes + ${likesIncrement}` }
	})
	.returning()
	.get()

await db
	.insert(userLikes)
	.values({
		userId: currentUserId,
		postId: updatedPost.id,
		likes: likesIncrement
	})
	.onConflictDoUpdate({
		target: [userLikes.userId, userLikes.postId],
		set: { likes: sql`likes + ${likesIncrement}` }
	})
	.run()

To fetch the likes of a post, we take a similar approach by identifying the post and the user, and finally, we query the database to retrieve the corresponding data. You can see the whole implementation in the source code.
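
As a rough idea of what that could look like (the actual code lives in the repository), here’s a sketch of the stats endpoint the button will fetch later on:

app.get('/api/posts/:slug/stats', async c => {
	const db = drizzle(c.env.DB)
	const slug = c.req.param('slug')

	// Identify the user the same way as in the like endpoint
	const ipAddress = c.req.header('cf-connecting-ip') || '0.0.0.0'
	const currentUserId = await hashIpAddress(ipAddress, c.env.IP_ADDRESS_SALT)

	const stats = await db
		.select({ totalLikes: posts.totalLikes, userLikes: userLikes.likes })
		.from(posts)
		.leftJoin(
			userLikes,
			and(eq(userLikes.postId, posts.id), eq(userLikes.userId, currentUserId))
		)
		.where(eq(posts.slug, slug))
		.get()

	return c.json({
		totalLikes: stats?.totalLikes ?? 0,
		userLikes: stats?.userLikes ?? 0
	})
})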

Deploying the Worker and database

We are almost ready to deploy our Worker and D1 database globally. First, log in with your Cloudflare account by running:

wrangler login

And apply the migrations to the remote database by running (the same command as before, without the --local flag):

wrangler d1 execute <DATABASE_NAME> --file=./drizzle/<FILE_NAME>.sql

And finally the Worker:

wrangler deploy

Now your Worker should be deployed. You can preview it at <YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev.

The like button component

For the like button, I created a Preact component with a design inspired by Delba Oliveira’s, with a few changes and tweaks, including a playful confetti animation when the user reaches the maximum number of likes. You can play with the component here (it will reset the likes count 3 seconds after reaching the maximum):


When a user interacts with the like button on the post, a couple of things happen behind the scenes to make the experience smooth and efficient.

Initializing likes data

Initially, when the component mounts, it fetches and stores both the total post likes and the user’s likes in state variables. This provides the necessary data for the current state of likes:

import { useEffect, useRef, useState } from 'preact/hooks'

// apiHost and slug are assumed to come from the component's props
const [likes, setLikes] = useState(0)
const [userLikes, setUserLikes] = useState(0)
const [status, setStatus] = useState<'loading' | 'success' | 'error'>('loading')

useEffect(() => {
	const fetchStats = async () => {
		const response = await fetch(`${apiHost}/api/posts/${slug}/stats`)
		if (response.status !== 200) {
			setStatus('error')
			return
		}
		const data = (await response.json()) as {
			totalLikes: number
			userLikes: number
		}

		setLikes(data.totalLikes)
		setUserLikes(data.userLikes)
		setStatus('success')
	}

	fetchStats()
}, [])

Optimistic updates and request batching

Now, when a user decides to like a post, there’s what we call an “optimistic update”. This means that, even before the server responds to the actual like being registered, the user interface is immediately updated. This instantaneous feedback makes the button click feel more responsive and improves the overall user experience.

Since I allow the user to like a post multiple times, the component doesn’t send a request to the server for each individual like. Instead, it batches these requests through a timeout:

const likesIncrement = useRef(0)

useEffect(() => {
	if (likesIncrement.current === 0) {
		return
	}

	function likePost() {
		fetch(`${apiHost}/api/posts/${slug}/like`, {
			method: 'POST',
			body: JSON.stringify({ count: likesIncrement.current })
		})
		likesIncrement.current = 0
	}

	const timeoutId = setTimeout(likePost, 1000)

	return () => clearTimeout(timeoutId)
}, [userLikes])

The timer is reset with each click, and the number of clicks between timeouts is tracked through the reference value likesIncrement. When the timeout is triggered, the request is sent and the reference value is reset.

Finally, on each button click we call the following function to update all the values and trigger the described actions:

const handleClick = () => {
	setLikes(likes + 1)
	setUserLikes(userLikes + 1)
	likesIncrement.current += 1
}

Conclusion

Thank you for reading this far! Crafting this like button feature has been a fun process and hopefully this walkthrough will spark some inspiration for your projects.

Feel free to reach out to me on Twitter if this has been useful to you or you have any suggestions for improving it.

Happy coding!