Contentful Content Migration API try/catch on field creation?

We are trying to set up a workflow for delivering content model changes to our other environments (stage & prod).
Right now, our approach is this:
1. Create a new Contentful field as a migration script using the Contentful CLI.
2. Run the script in local dev to make sure the result is as desired, using contentful space migration migrations/2023-01-12-add-field.ts
3. Add the script to Git in the folder migrations/[date]-[description].js
4. When releasing to prod, run all scripts in the migrations folder, in order, as part of the build process (see the sketch below).
5. When the folder contains "too many" scripts, and we are certain all changes have been applied to all envs, manually remove the scripts from Git and start over with an empty folder.
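For step 4, here is a minimal sketch of such a build step in Node, assuming the Contentful CLI is installed and already authenticated against the target space (the file layout and the --yes flag are assumptions to verify against your CLI version):
// run-migrations.js — runs every migration script in the folder, in order.
const { execSync } = require('child_process')
const fs = require('fs')
const path = require('path')

const dir = path.join(__dirname, 'migrations')
// Date-prefixed filenames sort chronologically, so a plain sort gives run order.
const scripts = fs.readdirSync(dir).filter((f) => /\.(js|ts)$/.test(f)).sort()

for (const script of scripts) {
  console.log(`Running ${script}`)
  // --yes skips the interactive confirmation prompt.
  execSync(`contentful space migration --yes "${path.join(dir, script)}"`, { stdio: 'inherit' })
}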
Where it fails
But between steps 4 and 5 there will be cases where a script has already been run in an earlier release, and that throws an error.
I would like the scripts to continue more gracefully without throwing an error, but I can't find support for it in the space migration docs. I have tried wrapping the code in try/catch without any luck.
Contentful recommends using the Content Migration API in favour of the Content Management API since it is faster. I guess we could use the Content Management API, but at the same time we want to follow best practice.
Are we missing something here?
I would like to do something like:
module.exports = function (migration) {
  // Add a test field to the blog post content type.
  const blogPost = migration.editContentType('blogPost')
  if (blogPost.fieldExists('testField')) {
    console.log('Field already exists')
  } else {
    blogPost.createField('testField').name('Test field').type('Symbol')
  }
}
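One possible workaround is to check the existing schema yourself before editing. This is a rough sketch built on the context argument that contentful-migration passes to migration functions; its makeRequest helper queries the Management API, so treat the exact request/response shape as an assumption to verify against your version:
module.exports = async function (migration, { makeRequest }) {
  // Fetch the current definition of the content type (assumed endpoint shape).
  const contentType = await makeRequest({
    method: 'GET',
    url: '/content_types/blogPost',
  })
  const hasField = contentType.fields.some((field) => field.id === 'testField')
  if (hasField) {
    console.log('Field already exists')
  } else {
    const blogPost = migration.editContentType('blogPost')
    blogPost.createField('testField').name('Test field').type('Symbol')
  }
}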

Related

How to change Netlify CMS for Strapi CMS?

I'm new to this frontend world. I have some knowledge of React and GraphQL, and that's why I've decided to try to implement a test blog with Gatsby, as it seems pretty popular and easy to use.
I also wanted to get my hands on Material UI, so I'm using this Gatsby starter: https://www.gatsbyjs.org/starters/Vagr9K/gatsby-material-starter
This starter includes the Netlify CMS integration, so I want to replace that and start using Strapi CMS instead, so I can have the content there.
Any idea on how to do this?
There's a lot of stuff in your question; I'll try to answer it step by step. If you need more details on how to create pages, etc., please let me know and I will update my answer.
If you want to change your source from Netlify to Strapi you need to set it up in your gatsby-config.js, replacing the gatsby-plugin-netlify-cms plugin with something like this:
{
  resolve: `gatsby-source-strapi`,
  options: {
    apiURL: `http://localhost:1337`,
    queryLimit: 1000, // Default to 100
    contentTypes: [`article`, `user`],
    // If using single types, place them in this array.
    singleTypes: [`home-page`, `contact`],
    // Possibility to log in with a Strapi user, when content types are not publicly available (optional).
    loginData: {
      identifier: "",
      password: "",
    },
  },
},
Note that you'll have to install your desired plugins and remove the unnecessary ones in order to reduce the bundle size and improve performance when using starters.
The next step is to create pages from your source CMS (articles, posts, pages, etc) using GraphQL. Maybe this blog helps you. But as a short summary, you need to create queries in your gatsby-node.js to retrieve data from Strapi CMS and create pages using Gatsby's API.
The idea is the same as in your starter; however, instead of using gatsby-source-filesystem and allMarkdownRemark in your page creation, you will use the objects provided by Strapi CMS. You can check the queries and the available objects by running gatsby develop and visiting localhost:8000/___graphql.
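As a rough illustration (the node type allStrapiArticle and the slug field depend on your Strapi content types, so treat them as placeholders), page creation in gatsby-node.js could look like this:
const path = require('path')

exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions
  // Query the Strapi-sourced nodes that were pulled in at build time.
  const result = await graphql(`
    {
      allStrapiArticle {
        nodes {
          slug
        }
      }
    }
  `)
  result.data.allStrapiArticle.nodes.forEach((node) => {
    createPage({
      path: `/articles/${node.slug}`,
      component: path.resolve('./src/templates/article.js'), // hypothetical template
      context: { slug: node.slug },
    })
  })
}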
Keep in mind that you will always query static data (i.e., data downloaded at build time) from your multiple sources, so when you run the develop command, the data is downloaded and made accessible via GraphQL.
You can find further information in the starter's repository.

files in server root not loading properly in Azure

I'm working on an app that uses the file system in the server's root directory instead of a database. It's basically a note application that allows me to save notes. Each note is a serialized object of the Note class, stored using the following structure: \Data\Notes\MyUsername\Title.txt
When I'm testing this on localhost through IIS Express everything works fine and I can easily go step by step there.
However, once I publish the app to Azure, the folder structure is still there (made a test Controller that uses Directory.GetFiles() and .GetDirectories() to simulate folder browsing so I'm sure that the files are there) but the file simply doesn't get loaded.
The loading method that's being called:
public T Load<T>(string filePath) where T : new()
{
    StreamReader reader = null;
    try
    {
        reader = new StreamReader(filePath);
        var RawDB = reader.ReadToEnd();
        return JsonConvert.DeserializeObject<T>(RawDB);
    }
    catch
    {
        return default(T);
    }
    finally
    {
        if (reader != null)
            reader.Dispose();
    }
}
Since I can't normally debug the app on Azure, I tried to dump as much info as I can through ViewData. Even there, everything looks okay and the paths match, but the deserialized object is still null. This only happens when trying to open an existing note WITHOUT creating a new one first (more on that later).
Additionally, like I said, those new notes get saved in the folder structure, and there's a note sidebar on the left that allows users to switch between notes. The note browser is nothing more than a list collected with a .GetFiles() of that folder.
On Azure, this works normally and if I were to delete one manually it'd be removed from the sidebar as well.
Now here's the kicker. On localhost, adding a note adds it to the sidebar and I can switch between them normally.
Adding a note on Azure makes all views display only that new note, regardless of which note I open, and the new note does NOT get stored in the structure (I don't know where it ended up at all!), even though the path is defined normally at that point and it should save just like it does on localhost.
var model = new ViewNoteModel()
{
    Note = Load<Note>($@"{NotePath}\{Title}.txt"), // Works on localhost, fails on Azure on many levels. Title is a URL param.
    MyNotes = GetMyNotes() // Works fine; reads the right directory on local and Azure.
};
To summarize:
Everything works fine on localhost; the important part doesn't work on Azure.
If no new note is created but an existing note is opened, the correct note (based on the URL param) gets loaded on localhost; on Azure it breaks and loads a default Note object (not null, just the default constructor data, since JsonConvert requires it).
If a new note is created, on localhost you'll see it and can still open all other notes; on Azure you will see only the new note, regardless of which note is picked.
It's really strange and I have no idea what could cause this. I thought it had something to do with Azure requests being handled differently, so maybe the controller pushes the view before the model is initialized completely, but that doesn't make sense since there's nothing async here.
However, the fact that it loads a note that doesn't exist on the server is even more absurd, and I have no explanation for that.
Additionally this issue is not linked with a session. I logged in through my phone and it showed the fake note there as well right away.
P.S. Before you say anything about storage, please note this: our university grants us a very limited Azure subscription, with the lowest-tier App Service, a 5 DTU SQL server, and 99% of the rest locked out of our subscription. This is why I'm storing stuff on the server, not because I believe it's the smart thing to do.

Perform a task on Azure batch

I am new to Azure batch. I have to perform a task on nodes in the pool.
The approach I am using is this: I have the code that I want to run on the node. I make a zip of the jar of the .class files, upload it to my Azure storage account, then get the application ID, put it in an ApplicationPackageReference, and add that to my task in the job.
String applicationId= "TaskPerformApplicationPack";
ApplicationPackageReference reference = new ApplicationPackageReference().withApplicationId(applicationId);
List<ApplicationPackageReference> list = new ArrayList<ApplicationPackageReference>();
list.add(reference);
TaskAddParameter taskToAdd = new TaskAddParameter().withId("mytask2").withApplicationPackageReferences(list);
taskToAdd.withCommandLine(String.format("java -jar task.jar"));
batchClient.taskOperations().createTask(jobId, taskToAdd);
Now when I run this, my task fails with the error:
access for one of the specified Azure Blob(s) is denied
How can I run a particular code on a node using azure batch job tasks?
I think a good place to start is the material below. (I have covered most of the helpful links along with the guided docs; they elaborate on the use of environment-level variables, etc., and I have included a few sample links as well.) I hope the material and samples below will help you. :)
Also, I would recommend recreating your pool if it is old, which will ensure your nodes are running the latest version.
Azure Batch learning path:
Samples & demo link, or look here.
A detailed walkthrough, depending on what you are using, i.e. CloudServiceConfiguration or VirtualMachineConfiguration: link.
Further to add from the article, also look here: Application Packages with VM configuration.
In particular, this link will take you through the guided process of how to use it in your code. Also, be it a resource file or a package, you need to make sure they are uploaded and available for use at the Batch level.
Along with a sample (below is a pool-level package example):
// Create the unbound CloudPool
CloudPool myCloudPool =
    batchClient.PoolOperations.CreatePool(
        poolId: "myPool",
        targetDedicatedComputeNodes: 1,
        virtualMachineSize: "small",
        cloudServiceConfiguration: new CloudServiceConfiguration(osFamily: "4"));

// Specify the application and version to install on the compute nodes
myCloudPool.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference
    {
        ApplicationId = "litware",
        Version = "1.1"
    }
};

// Commit the pool so that it's created in the Batch service. As the nodes join
// the pool, the specified application package is installed on each.
await myCloudPool.CommitAsync();
For the task level, a sample from the link above is (make sure you have correctly followed the steps mentioned there):
CloudTask task =
    new CloudTask(
        "litwaretask001",
        "cmd /c %AZ_BATCH_APP_PACKAGE_LITWARE%\\litware.exe -args -here");

task.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference
    {
        ApplicationId = "litware",
        Version = "1.1"
    }
};
Further to add: be it CloudServiceConfiguration or VirtualMachineConfiguration, an application package is a .zip file that contains the application binaries and supporting files that are required for your tasks to run the application. Each application package represents a specific version of the application (from reference 4).
I gave it a shot and was successful, so I am not able to replicate the error above; it seems like you might be missing something.

How to handle Contentful content data in Gatsby

I'm interested in using Gatsby to build a Netlify static site using content from Contentful.
Netlify has this nice getting started Gatsby guide:
https://www.netlify.com/blog/2016/02/24/a-step-by-step-guide-gatsby-on-netlify
But I'm a bit unsure of how to bring Contentful into the mix. Do I need to write scripts to convert my Contentful content into Gatsby 'markdown'?
Any ideas, links appreciated!
Since this question was posted, an official Contentful plugin's been added to Gatsby's collection (official as in created by Gatsby team, not Contentful): https://github.com/gatsbyjs/gatsby/tree/master/packages/gatsby-source-contentful
An example site's src code is here: https://github.com/gatsbyjs/gatsby/tree/master/examples/using-contentful
The plugin processes markdown via gatsby-transformer-remark and produces the resultant HTML, which you can access via Gatsby's GraphQL server with a query like this one from the example project:
contentfulProduct(id: { eq: $id }) {
  productName {
    productName
  }
  productDescription {
    childMarkdownRemark {
      html
    }
  }
  price
}
You can use the plugin to connect to both the Content API (for published assets/content) and the Preview API (for both published and draft content/assets).
We use NODE_ENV to pull from the Preview API in dev and the Content API in production.
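For example, the environment switch can live in gatsby-config.js. This is a sketch: the environment variable names are placeholders, while spaceId, accessToken, and host are real gatsby-source-contentful options:
// gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-contentful`,
      options: {
        spaceId: process.env.CONTENTFUL_SPACE_ID,
        accessToken:
          process.env.NODE_ENV === 'development'
            ? process.env.CONTENTFUL_PREVIEW_TOKEN
            : process.env.CONTENTFUL_DELIVERY_TOKEN,
        // The Preview API is served from a different host than the Delivery API.
        host:
          process.env.NODE_ENV === 'development'
            ? `preview.contentful.com`
            : `cdn.contentful.com`,
      },
    },
  ],
}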
Here's the script I am using to pull down data from Contentful:
https://gist.github.com/ivanoats/e79ebbd711831be2536d1650890055c4
I run this via an npm run script before gatsby build.
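For instance, npm's pre-hooks can chain the sync before the build: a "prebuild" entry in package.json runs automatically before "build" (the script path here is illustrative):
"scripts": {
  "prebuild": "node ./scripts/sync-contentful.js",
  "build": "gatsby build"
}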
I would love to work on a plugin or get ideas on better architecture for this process.
I wrote a post on this architecture in more detail on the Aerobatic blog
Right now the best option is to write a script which syncs content from Contentful to your Gatsby site's pages directory.
There are plans, however, for adding support within Gatsby to make this happen semi-automatically. Early days still here! See this issue for more: https://github.com/gatsbyjs/gatsby/issues/324

jenkins: setting root url via Groovy API

I'm trying to update Jenkins' root URL via the Groovy API, so I can script the deployment of a Jenkins master without manual input. (Aside: why is a tool as popular with the build/devops/automation community as Jenkins so resistant to automation?)
Based on this documentation, I believe I should be able to update the URL using the following script in the Script Console.
import jenkins.model.JenkinsLocationConfiguration
jlc = new jenkins.model.JenkinsLocationConfiguration()
jlc.setUrl("http://jenkins.my-org.com:8080/")
println(jlc.getUrl())
Briefly, this instantiates a JenkinsLocationConfiguration object; calls the setter setUrl with the desired value, http://jenkins.my-org.com:8080/; and prints out the new URL to confirm that it has changed.
The println statement prints what I expect it to, but following this, the value visible through the web interface at "Manage Jenkins" -> "Configure System" -> "Jenkins URL" has not updated as I expected.
I'm concerned that the value hasn't been updated properly by Jenkins, which might lead to problems when communicating with external APIs.
Is this a valid way to fix the Jenkins root URL? If not, what is? Otherwise, why isn't the change being reflected in the config page?
You are creating a new JenkinsLocationConfiguration object and updating that new one, not the existing one being used.
Use:
import jenkins.model.JenkinsLocationConfiguration

def jlc = JenkinsLocationConfiguration.get()
jlc.setUrl("http://jenkins.my-org.com:8080/")
jlc.save()
This gets the existing instance from the global Jenkins configuration, updates it, and saves the config descriptor back.
See: https://github.com/jenkinsci/jenkins/blob/master/core/src/main/java/jenkins/model/JenkinsLocationConfiguration.java
