Routing based on user agent with my CRA and Netlify functions


I have been working on a CRA for the past year. My routes are handled by React Router's BrowserRouter. Essentially, each route is its own customer, and each customer has its own assets. Each company has a folder in the public directory with its respective assets and a data.json file that holds the asset information, as well as the ID associated with each asset.

So for example, website.com/customer1/ would grab those assets and swap out background elements, fonts, etc., depending on what information is in the data.json file.
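Each customer's data.json looks roughly like this (simplified; the exact field names don't matter here, but each asset entry carries its ID plus the title, description, and image the site uses):

{
  "customer": "customer1",
  "background": "/customer1/background.png",
  "font": "/customer1/font.woff2",
  "assets": [
    {
      "gameID": "123",
      "title": "Example game title",
      "description": "Example game description",
      "imageUrl": "/customer1/game-123.png"
    }
  ]
}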

The issue I am currently facing is the lack of SEO support that CRA offers. I am currently using react-helmet to set the meta tags after the JavaScript loads, but crawlers that don't execute JavaScript never see them, so this won't cut it.
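For context, the Helmet usage is roughly this (component simplified):

import React from "react";
import { Helmet } from "react-helmet";

// These tags only exist after the bundle runs in the browser, so crawlers
// that don't execute JavaScript never see them.
function GamePage({ gameData }) {
  return (
    <>
      <Helmet>
        <title>{gameData.title}</title>
        <meta name="description" content={gameData.description} />
        <meta property="og:title" content={gameData.title} />
        <meta property="og:description" content={gameData.description} />
        <meta property="og:image" content={gameData.imageUrl} />
      </Helmet>
      {/* rest of the page */}
    </>
  );
}

export default GamePage;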

I've been experimenting with Netlify functions, and I am definitely able to control the website's response conditionally, but I cannot figure out how to let normal users into the website. I keep running into infinite redirect loops.
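For example, one redirect variant I tried for the non-crawler branch looked roughly like this (simplified). Because the catch-all rule below forces every path back through the function, the redirected request just comes straight back and loops:

// Simplified version of one redirect attempt for non-crawler traffic.
// The /* rule with force = true catches the redirected request and sends
// it back into this same function, so it never reaches the CRA build.
exports.handler = async function (event) {
  return {
    statusCode: 302,
    headers: { Location: event.path },
  };
};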

Here is what my netlify.toml file looks like:


[build]
  command = "CI=false npm run build"
  functions = "functions"
  publish = "build"

[dev]
  command = "npm start" # Command to start your server
  targetPort = 3000 # The port for your local server
  port = 8888 # The port that Netlify Dev will use
  functionsPort = 34567 # Port for Netlify functions, if you're using them
  autoLaunch = true # Automatically opens a browser window

[[redirects]]
  from = "/*"
  to = "/.netlify/functions/meta-server-function"
  status = 200
  force = true

And here is my function. I've been trying redirects and serving the raw HTML page, but it's not working; from what I understand, the serverless function exists outside of the build folder's scope.


exports.handler = async function (event, context) {
  // Check the user-agent from the request headers (guard against it being missing)
  const userAgent = (event.headers['user-agent'] || '').toLowerCase();
  const isCrawler = userAgent.includes('googlebot') || userAgent.includes('facebookexternalhit'); // ... other crawlers

  // The first path segment identifies the customer, e.g. /customer1/...
  const pathParameters = event.path.split("/");
  const customer = pathParameters[1];

  if (isCrawler) {
    // Extract gameID from query parameters
    const { gameID } = event.queryStringParameters || {};
    const gameData = await fetchGameData(gameID); // helper defined elsewhere, omitted here

    // Construct the dynamic meta tags with the game data
    const metaTags = `
      <title>${gameData.title}</title>
      <meta name="description" content="${gameData.description}">
      <meta property="og:title" content="${gameData.title}">
      <meta property="og:description" content="${gameData.description}">
      <meta property="og:image" content="${gameData.imageUrl}">`;

    // Generate the full HTML response (the JSON.stringify blob is just debug
    // output so I can see what the function receives)
    const htmlResponse = `
      <!DOCTYPE html>
      <html lang="en">
      <head>
      ${JSON.stringify({
        event: event,
        pathParameters: pathParameters,
        customer: customer,
      })}
        ${metaTags}
      </head>
      <body>
      </body>
      </html>`;

    return {
      statusCode: 200,
      headers: { 'Content-Type': 'text/html' },
      body: htmlResponse,
    };
  } else {
    // This is the part I can't figure out: normal users should get the regular CRA build here
    return {
      statusCode: 200,
      headers: { 'Content-Type': 'text/html' },
      body: `
      this should be the normal page
      `,
    };
  }
};
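And for completeness, the "serve the raw HTML page" attempt looked roughly like this (simplified); it fails because, as mentioned above, the build output isn't available to the function at runtime:

// Roughly what I tried for serving the built index.html from the function.
// Since the publish directory (build/) isn't bundled with the function,
// the file doesn't exist at runtime and the read fails.
const fs = require("fs");
const path = require("path");

exports.handler = async function () {
  const html = fs.readFileSync(
    path.join(__dirname, "..", "..", "build", "index.html"), // not present at runtime
    "utf8"
  );
  return {
    statusCode: 200,
    headers: { "Content-Type": "text/html" },
    body: html,
  };
};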

Is there any other way I can do this? I've been looking into using a reverse proxy, but if functions can handle it, that would be awesome.
