

The POST /blogs/cache endpoint initiates a fresh scrape of the project516.dev blog and replaces the server’s in-memory cache with the newly retrieved posts. Once the scrape completes, the updated data is also written to /tmp/cache.json so it survives server restarts. This is the only way to surface new blog posts through the API — GET /blogs, GET /blogs/latest, and GET /blogs/search all read from this cache.
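The refresh flow described above can be sketched as a small client helper. This is a minimal sketch, not an official SDK; it assumes the base URL shown in the curl example below and uses only the response shape documented on this page.

```python
# Sketch: trigger a cache refresh via POST /blogs/cache and return the
# parsed JSON confirmation. Raises urllib.error.HTTPError on 429 or 500.
import json
import urllib.request


def refresh_blog_cache(base_url: str = "https://api.project516.dev") -> dict:
    """POST /blogs/cache (no request body) and return the parsed response."""
    req = urllib.request.Request(f"{base_url}/blogs/cache", method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    # On success this prints: {'message': 'Blogs cached successfully'}
    print(refresh_blog_cache())
```

After a successful refresh, the read endpoints (GET /blogs, GET /blogs/latest, GET /blogs/search) serve the newly cached posts.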

Endpoint

POST /blogs/cache

Rate limit

1 request per minute per IP address. Exceeding this limit returns a 429 Too Many Requests response.
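To stay under the limit and avoid 429s, a client can throttle itself before calling the endpoint. A minimal sketch (the 60-second interval comes from the documented limit; the helper name is illustrative):

```python
# Sketch: compute how long to wait before the next POST /blogs/cache
# so that calls stay at most one per 60 seconds.
import time


def seconds_until_allowed(last_call: float, now: float,
                          min_interval: float = 60.0) -> float:
    """Return the remaining wait (0.0 if a call is already allowed)."""
    return max(0.0, min_interval - (now - last_call))


# Usage: sleep out the remainder of the window before refreshing.
# wait = seconds_until_allowed(last_call, time.time())
# if wait:
#     time.sleep(wait)
```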

Request body

This endpoint accepts no request body.

Response

On success, the endpoint returns a JSON object confirming the cache was updated.
message (string): Confirmation string. Always "Blogs cached successfully" on a successful scrape.
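A caller can verify the documented success payload with a small check. This sketch assumes only the message field described above:

```python
# Sketch: return True only for the documented success response body.
import json


def is_cache_success(body: str) -> bool:
    """Check that the body is JSON with the documented success message."""
    try:
        return json.loads(body).get("message") == "Blogs cached successfully"
    except (ValueError, AttributeError):
        # Not valid JSON, or not a JSON object.
        return False
```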

Code examples

curl -X POST https://api.project516.dev/blogs/cache

Example responses

Success:
{
  "message": "Blogs cached successfully"
}
Scrape failure (HTTP 500):
{
  "detail": "Error occurred while scraping blogs: Connection timeout"
}
Only one call is allowed per minute. New posts published to project516.dev will not appear in any read endpoint until you call this endpoint and receive a successful response.
This endpoint fetches the blog HTML directly from the GitHub-hosted source at https://raw.githubusercontent.com/Project516/project516.github.io/refs/heads/master/blog.html, parses the result, and writes the cache to /tmp/cache.json.

Error codes

429 Too Many Requests
You have exceeded the rate limit of 1 request per minute. Wait before retrying.

500 Internal Server Error
The scrape failed. The detail field in the response body contains the underlying error message. The existing cache is left unchanged.
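The status codes above suggest a simple retry policy. A sketch of one reasonable mapping (the wait durations are assumptions, not part of the API contract; only the status codes come from this page):

```python
# Sketch: map a POST /blogs/cache status code to a next action.
def next_action(status: int) -> tuple:
    """Return (action, wait_seconds) for a cache-refresh response."""
    if status == 200:
        return ("done", 0.0)    # cache replaced successfully
    if status == 429:
        return ("retry", 60.0)  # rate limited: wait out the 1/min window
    if status == 500:
        return ("retry", 60.0)  # scrape failed; the old cache is still served
    return ("give_up", 0.0)     # undocumented status: stop retrying
```

Because a 500 leaves the existing cache unchanged, retrying later is safe: the read endpoints keep serving the previous posts in the meantime.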
