
Implement a Node.js API using ChatGPT

Let AI work for you before jobs are replaced by AI

March 5, 2023
In the last version of the blog, I never got around to displaying the EXIF and geolocation information of my photos. At the time, I promised myself it would be implemented in the next version.
For a photography-themed website, EXIF data and shooting location are important information for visitors, and without them the site looks unprofessional.
While developing the new version recently, I decided to build this feature. I took a few detours along the way, but with inspiration and help from ChatGPT, I finally got there.

General Ideas

Both the shooting data and the GPS coordinates are part of the EXIF information, so the two problems are really one.
The simplest solution for the front end would be to extract the EXIF information at upload time, store it in the database, and just query an API when it's needed. But I use the open-source headless CMS Strapi, which doesn't have this capability out of the box. A plugin could surely do it, but that is beyond my abilities.
The data lives in Strapi, but the images live in object storage, and the CMS only returns an image URL. To extract anything, you first have to download the image from that URL and then parse it.
npm has ready-made packages such as exifr, which return the EXIF information when you pass in an image. So if I fetch the original image and feed it to exifr, I should get the EXIF, and then I can render the data to the page with useEffect.

Trying

Download the image from the URL we got and pass it to exifr's parse() method:
const res = await fetch(url);
const blob = await res.blob();
const rawData = (await exifr.parse(blob)) || {};
With a local image, the EXIF can be extracted just fine. With the code above, however, the request fails, and the error message reveals that it triggers the browser's CORS restrictions.
What is CORS?
According to MDN:
Cross-Origin Resource Sharing (CORS) is an HTTP-header based mechanism that allows a server to indicate any origins (domain, scheme, or port) other than its own from which a browser should permit loading resources. CORS also relies on a mechanism by which browsers make a "preflight" request to the server hosting the cross-origin resource, in order to check that the server will permit the actual request. In that preflight, the browser sends headers that indicate the HTTP method and headers that will be used in the actual request.
In short, the browser won't let a page send requests to another origin unless that server explicitly allows it. Here, the page on localhost issues a GET request to the OSS domain, and the browser blocks it. One fix is to configure the OSS side to allow requests from other origins.
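To illustrate what "the server allows it" means, here is a minimal sketch of the response header a server would need to send, written as an Express middleware. The front-end origin is a placeholder, and this is only an illustration; the OSS/CDN in this post is configured through its provider's settings, not with Express code.
const express = require("express");
const app = express();

// Tell browsers that pages from this origin may read our responses.
// "https://my-blog.example.com" is a placeholder for the real front-end origin.
app.use((req, res, next) => {
  res.setHeader("Access-Control-Allow-Origin", "https://my-blog.example.com");
  next();
});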
But my OSS comes from an obscure provider that gave me nothing beyond an access key, not even a graphical console for managing files, so there isn't much to configure on the OSS side.
What if I put a CDN in front of the OSS and resolve it to my own domain name? Wouldn't that let me get around the cross-origin problem?
It really works. After configuring a CDN and domain name for the image OSS and setting Access-Control-Allow-Origin on the CDN, the page finally fetched the EXIF information and rendered it as I had hoped.

Millionaire Behavior

I clicked around the site contentedly, watching the shooting information load. Ah, beautiful.
But when I glanced at the CDN dashboard that night, I got a scare.
Why had I used more than 1 GB of traffic all by myself?
After thinking about it, I found the problem with this design.
I use next/image to compress and optimize every picture on the site. In practice, the browser doesn't download images directly from OSS; the server downloads them, compresses them, and returns them to the browser. That shrinks the images and speeds up loading.
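Roughly how the photos are rendered (the component shape, props, and dimensions here are illustrative, not my exact code):
import Image from "next/image";

// next/image serves an optimized, resized version instead of the original file.
// Remote image domains have to be whitelisted in next.config.js.
export default function Photo({ photo }) {
  return <Image src={photo.url} alt={photo.title} width={1200} height={800} />;
}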
That part is fine.
The problem is the EXIF step. Every time a page loads, the browser downloads the original image through the CDN just to extract the EXIF.
It pulls down several megabytes of image for that little bit of shooting data, while what the user actually sees is still the compressed picture.
In other words, it costs more traffic than serving the original image outright, yet the displayed quality is worse than the original. So what exactly was I optimizing for?
This approach clearly won't work. I burned 1 GB of traffic on my own; I don't want to know what the bill would look like once the site goes live.
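A rough back-of-envelope (the per-photo size is my assumption, not a measurement): at about 5 MB per original, roughly 200 photo views already add up to 1 GB of CDN egress, and that grows linearly with every visitor.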

Inspiration from ChatGPT

I couldn't think of a solution, so I turned to ChatGPT.
It told me I could write an API on the server: the server downloads the original image and extracts the information, so the front end only needs to pass the image URL to this API and gets the EXIF back.
My first reaction was to reject the idea; I had never dared to imagine writing a backend for one of my designs, and the front end plus ops already give me enough headaches. But the more I thought about it, the more the suggestion made sense:
  1. My server has plenty of bandwidth, so running out of traffic is basically not a concern;
  2. The query results can be cached, so the actual traffic wouldn't be that large anyway.
Then let ChatGPT help me implement this API!

First Node.js program

ChatGPT told me that the API could be implemented with Express.
Every step that follows came from ChatGPT's suggestions; I only tweaked the structure slightly.
First define the route:
const express = require("express");
const app = express();
const port = 1216;

app.get("/exif", async (req, res) => {
  // handle the EXIF query here
});

app.listen(port, () => console.log(`EXIF query API running on port ${port}!`));
The extraction itself reuses the logic from the earlier front-end attempt, just moved to the server:
const exifr = require("exifr");
const axios = require("axios");

async function getExif(url) {
  const exif = {};
  try {
    // download the original image, then parse its EXIF block
    const response = await axios.get(url, { responseType: "arraybuffer" });
    const rawData = await exifr.parse(response.data);
    exif.Maker = rawData.Make || "unknown";
    exif.Model = rawData.Model || "unknown";
    exif.ExposureTime = formatShutterTime(rawData.ExposureTime) || "unknown";
    exif.FNumber = rawData.FNumber || "unknown";
    exif.iso = rawData.ISO || "unknown";
    exif.FocalLength = rawData.FocalLength || "unknown";
    exif.LensModel = rawData.LensModel || "unknown";
    return exif;
  } catch (err) {
    throw err;
  }
}
formatShutterTime() converts the raw shutter value (a decimal such as 0.0001) into the fractional notation photographers are used to:
function formatShutterTime(shutterTime) {
  if (!shutterTime) return "0";
  const time = parseFloat(shutterTime);
  if (time >= 1) {
    return time.toFixed(2);
  }
  const fraction = Math.round(1 / time);
  return `1/${fraction}`;
}
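For example (illustrative values):
formatShutterTime(0.0008); // "1/1250"
formatShutterTime(2);      // "2.00"
formatShutterTime(0);      // "0"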
GPS works the same way. Both functions can be pulled into separate files and called from the main entry point.
const exifr = require("exifr");
const axios = require("axios");

async function getGPS(url) {
  const GPS = {};
  try {
    // download the original image and read the GPS coordinates from its EXIF
    const response = await axios.get(url, { responseType: "arraybuffer" });
    const rawData = await exifr.parse(response.data);
    GPS.latitude = rawData.latitude || 0;
    GPS.longitude = rawData.longitude || 0;
    return GPS;
  } catch (err) {
    throw err;
  }
}

module.exports = getGPS;
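A rough sketch of that wiring in the main entry file (the module paths ./exif and ./gps are my assumption; `app` is the Express instance created earlier):
const getExif = require("./exif"); // hypothetical module paths
const getGPS = require("./gps");

// the /exif route (shown below with caching) calls getExif(url);
// /gps is wired the same way:
app.get("/gps", async (req, res) => {
  try {
    res.json(await getGPS(req.query.url));
  } catch (err) {
    res.status(500).send(err.message);
  }
});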
ChatGPT's original answer had no caching logic. After a reminder from me, it recommended node-cache and produced the following code:
1app.get("/exif", async (req, res) => {
2  const url = req.query.url;
3  const cacheKey = `exif:${url}`;
4  let exifData = cache.get(cacheKey);
5
6  if (!exifData) {
7    try {
8      exifData = await getExif(url);
9    } catch (err) {
10      return res.status(500).send(err.message);
11    }
12    cache.set(cacheKey, exifData, 3600);
13    res.json(exifData);
14  } else {
15    res.json(exifData);
16  }
17
18});
In plain words: when a request comes in, the cache is checked first; if a cached result exists, it is returned directly; if not, the EXIF is extracted, stored in the cache, and then returned.

Result

The completed code can be viewed here.
Run it with Node to see whether the API works.
Append the image URL as a query parameter to the API's domain and send a GET request; it returns the following data:
{
  "Maker": "SONY",
  "Model": "ILCE-7M4",
  "ExposureTime": "1/1250",
  "FNumber": 4,
  "iso": 100,
  "FocalLength": 61,
  "LensModel": "FE 24-105mm F4 G OSS"
}
It worked!
The remaining work is simple: fetch the returned data in useEffect() and display it on the page.
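A minimal sketch of that front-end call (the component shape, state names, and API host are assumptions):
import { useEffect, useState } from "react";

function ExifPanel({ imageUrl }) {
  const [exif, setExif] = useState(null);

  useEffect(() => {
    // query the EXIF API for this image; the host is a placeholder
    fetch(`https://api.example.com/exif?url=${encodeURIComponent(imageUrl)}`)
      .then((res) => res.json())
      .then(setExif)
      .catch(() => setExif(null));
  }, [imageUrl]);

  if (!exif) return null;
  return (
    <p>
      {exif.Maker} {exif.Model} · {exif.ExposureTime}s · f/{exif.FNumber} · ISO {exif.iso}
    </p>
  );
}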
GPS works the same way, except the coordinates are handed to Mapbox.
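For the map, a minimal Mapbox GL JS sketch might look like this (the token, container id, and map style are placeholders):
import mapboxgl from "mapbox-gl";

mapboxgl.accessToken = "YOUR_MAPBOX_TOKEN"; // placeholder

// `gps` is the { latitude, longitude } object returned by the /gps endpoint
function showOnMap(gps) {
  const map = new mapboxgl.Map({
    container: "map", // id of the map <div>
    style: "mapbox://styles/mapbox/streets-v12",
    center: [gps.longitude, gps.latitude],
    zoom: 12,
  });
  new mapboxgl.Marker().setLngLat([gps.longitude, gps.latitude]).addTo(map);
}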
You can see the final effect by opening the All Photography page and clicking any photo.

Times Have Really Changed

Throughout the process, ChatGPT gave me ideas and the related code. Its answers often contain mistakes, but the overall thinking is sound. You have to correct its misinformation, or check things on Google; when something puzzles you, you keep asking and keep learning. It's a positive cycle.
I don't know whether AI will end up replacing our jobs, but it lets me, a designer, do things I never even dared to think about before, and I love that.
