I'm trying to figure out how to work effectively with data managed through Netlify CMS.
My site will be built on Next.js.
For example, I create a Posts collection with collection type Folder.
As I enter data through Netlify CMS, files are created in the GitHub repository.
And now I need to get a list of posts with title, preview image, and a short description.
Question 1: What is the best way to get data from a Netlify CMS collection that is stored in the GitHub repository?
Question 2: How can I get the list of posts without reading every file in full?
I assume that you need to use a build hook and cache the data. But I don't have a concrete solution yet.
I'm working with Kentico EMS 12's content staging feature. I have a number of pages I've attempted to sync to a new environment. It seems everything goes well except in circumstances where I have a page type that has a reference to another document (e.g. I might have a page with a web part containing a reference to a particular form that shows in a modal popup). It seems those references are blank in the destination environment, and I'm forced to re-select them across the board. Is there any particular approach to using the staging feature that would prevent this from happening?
Welcome to SO, Mike!
FYI, there's no need to cross-post questions on SO and DevNet. As long as you tag your SO posts with kentico, they will automatically be brought into DevNet.
You need to have the objects (page types, transformations, templates, widgets, page templates, etc.) in the new environment before you can successfully sync pages over. Pages have far too many dependencies on objects, and the sync mechanism does not automatically sync those objects over based on a page, for many reasons. So make sure any objects associated with that page are actually created/synced to the new environment FIRST. Once they exist in that new environment, you should be set. If you later make updates to those objects, no worries: the IDs already exist, and that's what the page is looking for.
I need to make some text changes to over 100 blog posts migrated from a WordPress site. I was going to do this via SQL, updating rows in the Common_BodyPartRecord table.
When I update the rows the changes are not reflected in the front end. I understand Orchard uses NHibernate, is there some sort of caching I am not aware of?
I know that you are advised not to mess about in the database, so is there a better way to do some bulk text manipulation? If necessary I can generate an Orchard module and do this via a database migration.
NB: all blog posts are the latest version and published.
It's not exactly caching per se: the body part is also stored in the Infoset, a blob of XML that you can find on the content item records. You need to change both.
I'm looking to use Redmine for document management. I know that Redmine is not ideal for this task but there is already a lot of content on the site so I'd like to utilize it if possible.
Redmine currently does not have a great Documents module. The files we've uploaded appear to be tied to the specific page they were added on, and there doesn't seem to be a way to move them to another page (short of downloading and re-uploading to the proper page).
Idea 1
I see there is a Files section, which could work as a central repository (and you can upload documents per release). However, is there a way to set up a nice-looking 'front-end' page that automatically updates based on new submissions to the Files tab? I envision this front end as a simple wiki page with the document name, a short description, and a link to the file posted in the Files tab.
There are so many documents uploaded to varying pages on the Redmine site. I would only do the whole download and re-upload of files if there was a way to automatically update the 'front end' wiki.
Idea 2
I see there is a DMSF plugin for Redmine. Has anyone used this before, and has it solved your document management issues? I'd like to hear your feedback. Even if DMSF doesn't totally solve my issue, anything is better than what I have now.
Thanks!
In my opinion, the DMSF module is a perfect companion for Redmine. We have adopted it in our company. You can easily manage document versions, WebDAV access, custom approval workflows, and notifications on document modifications, with the extra value of being well integrated with Redmine features (roles, dynamic links in wiki pages and in issue text and notes).
I am beginning SharePoint development and have some quick questions concerning basic terms.
How do I find out whether a particular site is a site collection or just a site, JUST BY THE URL? Is there a PowerShell command to do this?
I was creating some sites in SharePoint. Some site URLs were prefixed with /sites/sitename, whereas others were directly under the base URL of SharePoint. What is the difference between the two? And how do I recreate the ones under the /sites node? For some reason, I can't find the option to create under the /sites node again. Please explain this concept, as all the MSDN tutorials are very confusing for beginners like me (they are good once you get the hang of the basics).
Please provide an analogy to help understand web application, site collection, site, web site, etc.
Is there a way to use NEWFORM.aspx for a document library instead of UPLOAD.aspx?
The Site collection is at the root level of your Web application.
So http://abc.com/ => Site collection
Using PowerShell: open the SharePoint Management Shell and run Get-SPSite to get all site collections.
The /sites/ prefix is called a managed path.
It can be defined in Central Administration for every web application.
The option to select /sites will be available only when you create the second site collection under the web application (the first one takes the / path by default).
Have a look at the TechNet article on managed paths.
A document library is for uploading files, not for storing user-submitted data; for that you need to create a list.
1) A Document Set is used in cases where multiple documents share the same properties; it's like putting all these documents in a folder and then providing attributes to that folder which are in turn applied to each document in the folder.
In your case, if all the files share the same values for the 8 fields, then a document set is the correct way to go.
2) If there is additional metadata associated with the files, it can be added either to the content type (e.g. the document or document set content type) or to columns in the library itself; you don't need to create a separate list to hold that data. Adding data to the content type ensures consistency across all document libraries within that site collection; adding columns to the library affects only that library.
I'm trying to build an e-commerce site for a client, and it isn't quite as straightforward as I had hoped. I am using Magento Community, and my client has a wholesaler who provides a data feed (over 5000 products). In order to make this data feed compliant with Magento (and its attributes), I edited some column headings in Excel and successfully uploaded the result as a CSV file.
My issue is that the wholesaler regularly renews the data feed automatically. When this happens, I am assuming my tweaking of the spreadsheet will be overwritten, making my now Magento-compatible CSV file useless again.
My question then is how can I make the wholesaler data feed compliant with my revised version so I don't have to continually rename elements? Is this possible?
I apologise if this sounds very stupid, but I am more accustomed to static website builds.
Thank you in advance.
Does your web server have access to the newly produced, automatically updated data feed?
If so, just fetch the feed, modify it to be compliant with the Magento data feed format, then find Magento's 'processfeed' (or whatever the import function is called) and run it against the file.
If your web server cannot automatically fetch the file, can the PC producing the feed automatically post the data to the web server?
If neither is possible, then there will always be a person who has to manually re-upload the feed. In that case, just make a simple 'Upload feed' page for them (or modify the Magento page), let them upload their standard feed, reformat it, then process the reformatted feed.
You would need to write a script to transform the wholesaler data feed into Magento's format, so you do not need to rename columns manually every time.
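A short sketch of what that transformation script could look like, assuming a plain CSV feed with no quoted commas in the header row. The column headings in `HEADER_MAP` are made up for illustration; substitute the wholesaler's real headings and the Magento attribute codes your import profile expects.

```javascript
// Illustrative mapping from wholesaler column headings to Magento
// attribute codes; unmapped headings pass through unchanged.
const HEADER_MAP = {
  "Product Name": "name",
  "Stock Code": "sku",
  "RRP": "price",
  "Long Description": "description",
};

// Rewrite only the header row of a CSV; data rows are left untouched.
function remapHeaders(csvText) {
  const lines = csvText.split("\n");
  const headers = lines[0]
    .split(",")
    .map((h) => HEADER_MAP[h.trim()] || h.trim());
  return [headers.join(","), ...lines.slice(1)].join("\n");
}
```

Run on a schedule (cron, or triggered when the feed lands), the script would read the wholesaler's file, write the remapped version, and drop it wherever Magento's import job picks it up, removing the manual Excel step entirely.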