I'm creating a user-generated content site using Express.js. How can I add the URLs of this generated content to the sitemap automatically?
These URLs also need to be removed from the sitemap when a user deletes their account or their content.
I tried the sitemap builder npm packages created for Express.js, but none of them worked as I wanted, or their intended use didn't match mine.
I am not sure I understood your question, so I assume the following:
Your users can generate new URLs that you want to publish in a sitemap.xml returned from a specific endpoint, right?
If so, I'd suggest using the sitemap.js package. However, this package still needs a list of URLs and the metadata you want to deliver.
You could just save the URLs and metadata to a database table, the filesystem, or whatever data store you use. Every time content is generated or deleted, you also update your URL list there.
Now, when someone accesses the sitemap endpoint, the URLs are read from storage and sitemap.js generates the XML, as sketched below. Goal achieved!
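A minimal sketch of such an endpoint, assuming the sitemap npm package and a hypothetical loadUrlsFromDb() helper that returns your stored URL list:

const express = require('express');
const { SitemapStream, streamToPromise } = require('sitemap');

const app = express();

app.get('/sitemap.xml', async (req, res) => {
  // loadUrlsFromDb() is a placeholder for however you read the stored
  // URL list (database table, file, ...). Each entry: { path, updatedAt }.
  const urls = await loadUrlsFromDb();

  const smStream = new SitemapStream({ hostname: 'https://example.com' });
  urls.forEach((u) => smStream.write({ url: u.path, lastmod: u.updatedAt }));
  smStream.end();

  const xml = await streamToPromise(smStream);
  res.header('Content-Type', 'application/xml');
  res.send(xml.toString());
});

Because the list lives in your own data store, deleting a user's content (or account) just means deleting the corresponding rows; the next sitemap request reflects it automatically.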
I am setting up two websites under the same Firebase project for the first time and need some guidance.
The first website is a React app and it already works.
For the second site I created a folder named SecondSite, with a subfolder called public. At this point public only contains a static HTML file (index.html) that I am able to serve on the second site. I did that following a couple of documents and tutorials that I found.
Here is what I want to do next with my second site: I want it to serve JSON content that I plan to build from the contents of the Realtime Database in my Firebase project. What is the minimum I need to do for that?
I believe I need to do some more setup inside the SecondSite folder (to be able to connect to the DB), like installing some Node.js modules, but I am not sure how to proceed.
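For reference, one minimal setup (my assumption, not something confirmed in the thread) is a Cloud Function that reads the Realtime Database, plus a Hosting rewrite on the second site that routes requests to it:

// functions/index.js — needs the firebase-functions and firebase-admin modules
const functions = require('firebase-functions');
const admin = require('firebase-admin');

admin.initializeApp();

// Serves the contents of /items as JSON; "/items" is a placeholder
// for wherever your data actually lives in the Realtime Database.
exports.api = functions.https.onRequest(async (req, res) => {
  const snapshot = await admin.database().ref('/items').once('value');
  res.json(snapshot.val());
});

// In firebase.json, the second site's hosting config would then add:
//   "rewrites": [{ "source": "/api/**", "function": "api" }]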
I have assets from WordPress being uploaded to a GCP Storage bucket. But when I list the links to these assets on the website I'm working on, I would like the file to download automatically when the user clicks a link, instead of opening in the browser.
Is there an "easy" way to implement this behaviour?
The project runs WordPress as a headless API with a Next.js frontend.
Thanks
You can change object metadata on your objects in Cloud Storage to force browsers to download files directly instead of previewing them, via the content-disposition property. Setting this property to attachment makes the content download directly.
I quickly tested downloading public objects with and without this property and can confirm the behavior: downloads do happen directly. The documentation explains how to quickly change the metadata of existing objects in your bucket. While it is not directly mentioned, you can use wildcards to apply metadata changes to multiple objects at the same time. For example, this command will apply the content-disposition property to all objects in the bucket:
gsutil setmeta -h "content-disposition:attachment" gs://BUCKET_NAME/**
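If you would rather set it from Node.js (for example right after uploading), the @google-cloud/storage client exposes the same property; a minimal sketch, with the bucket and file names as placeholders:

const { Storage } = require('@google-cloud/storage');

const storage = new Storage();

// Sets content-disposition on one object so browsers download it
// instead of rendering it inline.
async function makeDownloadable(bucketName, fileName) {
  await storage
    .bucket(bucketName)
    .file(fileName)
    .setMetadata({ contentDisposition: 'attachment' });
}

makeDownloadable('BUCKET_NAME', 'uploads/report.pdf').catch(console.error);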
I want to create a web application using React in which a page is loaded based on the link used. I know that in React, whenever you visit a direct link the entire app is reloaded, and you normally navigate to a page using the buttons or links provided in the app. I want to generate a temporary link for users which will contain information about the data to be provided; the back end will check it, retrieve the data from the database, and return it to the front end. The link will be valid for a limited duration, say 24 hours, and will carry an auth token. Can anyone please help me with how I can do that?
React Router is a library that handles this type of operation and is used in many, many apps. Among other things, it can parse URLs and render different content based on the path, query parameters, and more.
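A minimal sketch of the routing side, assuming React Router v6 and a hypothetical /share/:token path whose token your back end validates and resolves:

import { BrowserRouter, Routes, Route, useParams } from 'react-router-dom';

function SharedPage() {
  // Token from the URL; the back end checks its validity/expiry and
  // returns the data it maps to, e.g. via fetch(`/api/shared/${token}`).
  const { token } = useParams();
  return <div>Loading shared content for token {token}...</div>;
}

export default function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/share/:token" element={<SharedPage />} />
      </Routes>
    </BrowserRouter>
  );
}

The expiry and token checks belong on the server; the front end only passes the token along.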
I have a static website hosted on Amazon S3. I regularly update it. However, I am finding that many of the users accessing it are seeing a stale copy.
By the way, the site is http://cosi165a-f2016.s3-website-us-west-2.amazonaws.com, and it's generated by a Ruby static site generator called nanoc (very nice, by the way). It compiles the source material for the site (https://github.com/Coursegen/cosi165a-f2016) into the HTML, CSS, JS, and other files.
I assume that this has to do with page freshness and the fact that the browser is caching pages.
How do I ensure that my users see a fresh page?
One common technique is to keep track of the timestamp of your last static-asset upload to S3, then use that timestamp as a querystring parameter in your HTML.
Like this:
<script src="//assets.com/app.min.js?1474399850"></script>
The browser will still cache that result, but if the timestamp changes the URL changes too, so the browser has to fetch a new copy.
The technique is called "cachebusting".
There's a Grunt module if you use Grunt: https://www.npmjs.com/package/grunt-cachebuster. It will calculate a hash of each asset's contents and use that as the filename.
I'm building a website on express.js and I'm wondering where to store images. Certain 'static' website info like the team pages will be backed by a database. If new team members come onboard, we push new data to CouchDB and a new team page shows up on the site.
A team page will include a profile picture, which will be stored in CouchDB along with other data.
Should I send the image through the web server, or just send a reference to where the image lives and have the client grab it from the database directly, since CouchDB is an HTTP server itself?
I am not a CouchDB expert, but here are my two cents. In general, hitting the DB for every image is going to increase the load; if the website is accessed by many people, that will be a lot.
The ideal way is to serve the images through a CDN and have the CDN point at your resource server/web server.
You can store the profile pics (and any other files) as attachments to the docs. The load is the same as for any other web server.
Documentation:
attachment endpoint /db/doc/attachment
document endpoint /db/doc
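To illustrate (the site database name, doc IDs, and avatar.jpg attachment name are my assumptions), the client can load a pic straight from CouchDB:

// CouchDB serves attachments over plain HTTP: GET /db/doc/attachment
const COUCH_URL = 'https://couch.example.com';

function avatarUrl(memberId) {
  return `${COUCH_URL}/site/${encodeURIComponent(memberId)}/avatar.jpg`;
}

// e.g. in a template: <img src={avatarUrl('team-alice')} alt="Team member" />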
CouchDB manages ETags for attachments as well as for docs and views. Clients that have already cached the pics will get a lightweight 304 response for every identical request. You can try it out with my CouchDB-based blog, lbl.io: open your favorite browser's developer tools and observe the image requests across multiple refreshes.
Hint 1: If you have the choice between an inline attachment upload (Base64-encoded in the doc; 1 request to create a doc with an attachment) and an attachment-only upload (multipart/related in the original content type; 2 requests to create a doc with an attachment, or 1 request to create an attachment when the doc already exists), then choose the second. It is handled more efficiently by CouchDB.
Hint 2: You can configure CouchDB to gzip attachments based on their content type. It reduces the load a lot.
I just dump avatars in /web/images/avatars, store only the filename in CouchDB, and serve the folder with express.static(), as in the sketch below.
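In code that can be as simple as (paths assumed):

const path = require('path');
const express = require('express');

const app = express();

// Serve the avatar folder as static files; CouchDB stores only the filename.
app.use('/images/avatars',
  express.static(path.join(__dirname, 'web/images/avatars')));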
You certainly can use a CouchDB attachment.
You can also create an Amazon S3 bucket and save the absolute HTTPS path on your user objects.