Generating dynamic meta image for your Next.js app

One of the features of Writings is marking content as public and sharing it with others via a URL. Implementing the sharing flow itself was fun, but there was a small problem: the shared URLs were completely unpresentable on social media. They looked dull. So I thought, I need to add something to spice things up.

My first idea was to add a generic meta image that resembles an article; why bother with more, when SEO is not one of the features I am trying to sell? It was better, but these static images still didn't satisfy my itch for aesthetics. I started researching how other platforms do this. Among others, I discovered how Hashnode does it: apparently via a third-party API called imgix (the Blending API in particular). To me that was an obvious over-complication, so I decided to give imgix a try only after exhausting all other options.

As Writings has another way to share content (exporting a writing as a social image and sharing it on Twitter), I thought of reusing that logic for the meta image generation: generate the image from the content and set it as the value of the og:image tag. That seemed like a catchy idea, but it ultimately failed because the image has to be generated headlessly, meaning: without a browser. Think of a scenario where a user pastes a URL on Facebook and the preview loads beneath it. At that point, the content I wanted to screenshot does not exist, as it hasn't been rendered. So my second idea failed too.

One thing led to another and I finally discovered vercel/og-image (Open Graph Image as a Service). The library does exactly what I need: generate a meta image for any dynamic data we want to handle. The only remark was that, using it with Next.js, I was hit with CORS errors every time I wanted to generate the image. When dealing with remote calls in Next.js, following the framework's design, we want to go via the built-in API routing. For those who are reading this article but don't know what Next.js API routing is: it's a feature of the framework that lets you execute asynchronous calls as part of the project's structure. API routes make it easy to create an API endpoint as a Node.js serverless function: you put a JS function in a file that lives in pages/api/<something>.js and call it (with fetch or axios). To make this happen for the og-image service, one has to implement the logic that generates and returns the image as a result. But in my case, all I need in the end is a URL. So if we want to circumvent the boilerplate of the image generation, we can use a library that serves as a wrapper over the service and exposes it as an API route.
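As a quick refresher, an API route is just an exported handler function. Here is a minimal sketch; the file name and response body are invented for illustration:

```javascript
// pages/api/hello.js (hypothetical route, for illustration only)
// Next.js invokes this handler for any request to /api/hello.
export default function handler(req, res) {
  res.status(200).json({ message: "Hello from an API route" });
}
```

On the client, you would call it with fetch("/api/hello") and read the JSON response.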

Such a library is next-api-og-image.

How to use next-api-og-image

The usual drill:

npm i next-api-og-image chrome-aws-lambda

The chrome-aws-lambda package is needed for the headless Chromium-based browser (a rendering engine without the actual browser UI).

Once we have the dependencies, we need to create a function in the pages/api folder, something like: pages/api/generate-meta-image.js

The content of that file determines the structure of the image. We will use the withOGImage function, pass it the HTML template from which we want to generate the image, specify the caching, and define whether the binary image data should be rendered as HTML to help us debug during development. Overall, something like this:

import { withOGImage } from 'next-api-og-image';

export default withOGImage({
  template: {
    html: ({ title, description, detail }) => `
            <!-- The image content structure -->
    `,
  },
  cacheControl: 'public, max-age=604800, immutable',
  dev: { inspectHtml: false },
});

The html block defines how the image will look. As we are operating on pure HTML here, JSX is not allowed and will not be recognized, so we have to write plain HTML only. To generate an image like this one: image.png

I am using the following HTML template:

        <link href="^2/dist/tailwind.min.css" rel="stylesheet">
    <body class="m-14 h-full">
        <div class="flex flex-col mt-72">
            <h1 class="text-7xl font-semibold text-black truncate">${title}</h1>
            <p class="mt-4 text-4xl font-extralight text-black">${description}</p>
            <h2 class="mt-6 text-2xl font-semibold">${detail}</h2>
        </div>
    </body>

Luckily we can use Tailwind(.min) here! 😉

It's a very simple structure, as the image is simple. But you can go crazy with your HTML and define a background, gradients, colors, CSS styles, fonts, everything that HTML a̶s̶ ̶a̶ ̶p̶r̶o̶g̶r̶a̶m̶m̶i̶n̶g̶ ̶l̶a̶n̶g̶u̶a̶g̶e̶ can do.

An obvious question now would be: where do the input arguments (title, description, and detail) come from? They are the input parameters of the withOGImage template function, and they get filled in from the query parameters when the API route is called.
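To make this concrete, here is a small sketch of how such a URL can be built on the client. The helper name is hypothetical, but the route path matches the generate-meta-image file created earlier:

```javascript
// Hypothetical helper: builds the URL that og:image will point to.
// Each value is URI-encoded so it survives as a query parameter.
function buildMetaImageUrl(title, description, detail) {
  return (
    `/api/generate-meta-image?title=${encodeURIComponent(title)}` +
    `&description=${encodeURIComponent(description)}` +
    `&detail=${encodeURIComponent(detail)}`
  );
}
```

For example, buildMetaImageUrl("Hello World", "A post", "@writings") returns /api/generate-meta-image?title=Hello%20World&description=A%20post&detail=%40writings.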

So what does next-api-og-image provide? It enables the generation of the meta image, based on input parameters, as a simple URL. Seeing how the function from above is used will make it much clearer.

To generalize the meta data generation, I am using a custom Meta component based on Next.js' built-in Head component:

import Head from "next/head";

const Meta = ({ title, keywords, description, detail, url, withImage = false }) => {
  let image;
  if (title && description && withImage) {
    const encodedTitle = encodeURIComponent(title);
    // getPlainText is a helper (defined elsewhere) that strips markup from the description
    const clearedDescription = getPlainText(description);
    const encodedDescription = encodeURIComponent(clearedDescription);
    const encodedDetail = encodeURIComponent(detail);

    image = `/api/generate-meta-image?title=${encodedTitle}&description=${encodedDescription}&detail=${encodedDetail}`;
  }

  return (
    <Head>
      <meta name="viewport" content="width=device-width, initial-scale=1" />
      <meta name="keywords" content={keywords} />
      <meta name="description" content={description} />
      <meta charSet="utf-8" />

      <meta property="og:url" content={url} />
      <meta property="og:type" content="website" />
      <meta property="og:description" content={description} />
      <meta property="og:title" content={title} />
      <meta property="og:image" content={image} />

      <meta name="twitter:card" content="summary_large_image" />
      <meta property="twitter:url" content={url} />
      <meta name="twitter:site" content={detail} />
      <meta name="twitter:title" content={title} />
      <meta name="twitter:description" content={description} />
      <meta name="twitter:image" content={image} />

      <link rel="icon" href="/favicon.ico" />
    </Head>
  );
};

Meta.defaultProps = {
  title: "Writings",
  description: "A simple writing app",
};

export default Meta;

As you can see, the image generation is optional. I don't want to generate a dynamic meta image for all of my pages, just for the public ones. When the withImage param is true, I take the title, description, and detail params into consideration and pass them on, as a simple URL, to the API route, which is the generate-meta-image function. The catch is on this line:

image = `/api/generate-meta-image?title=${encodedTitle}&description=${encodedDescription}&detail=${encodedDetail}`;

Then the image is set as the og:image content value, regardless of whether it's null or has a value. If we open the URL of the function in the browser, we'll get the image generated and presented to us. If you see it, it works.

This is like calling an endpoint from within our own project: we make a request, and the response is what the image URL resolves to. That is part of the magic that Next.js provides, together with the vercel/og-image service and the next-api-og-image library.


Solves the problem! 👌