We're running IoT Edge modules. Inside our module, we update a bunch of files. We noticed that most of the time, if the host is restarted, the container is restarted and the files we updated still exist.
A few times, however, we noticed that when the host restarted, the container was re-created from the original image and all data changes were lost.
Our understanding is that IoT Edge uses the Docker restart policy "always", which should always preserve the container's data.
I would make the following suggestions:
Do not store important data in the container's writable layer, i.e. do not rely on the restart policy.
The reason the container was rebuilt could be that a new version of your module image was deployed, so the container was recreated from the new image.
Set up your module deployment manifest properly by using the module container createOptions to attach a local volume to the container (createOptions -> HostConfig -> Binds) and store your data there. This will survive any re-creation of your module container. Something like:
"createOptions": {
"HostConfig": {
"Binds": [
"/app/db:/app/db"
]
}
}
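Once the deployment is applied, you can sanity-check on the edge device that the bind is actually in place; a minimal sketch, with the module name as a placeholder:

iotedge list
docker inspect <module-name> --format '{{ json .HostConfig.Binds }}'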
I'm trying to get a translator service (https://github.com/LibreTranslate/LibreTranslate) running in either Azure Container Apps (ACA) or Azure Container Instances (ACI). I tried ACA first, but the image is too large: ACA only allows images up to 4 GB and this image is about 7 GB, so that didn't work out. I then tried ACI, and there it shows as running, with no errors or anything else I can find, but when I go to the URL provided after the container is running it gives me nothing except a stream timeout after a while.
Does anyone know what I am doing wrong?
I tried following the how-tos provided by Azure and Microsoft, but they haven't helped so far.
I have tried to reproduce the same in my lab environment and got the results below.
Step 1: Clone the repo.
Step 2: To test creating and running the image on the local machine, run the shell script run.sh. Refer to the README.md file in the source code repo for more information. Once the script has run successfully, check that the image and container were provisioned.
Step 3: Create a tag for the image libretranslate/libretranslate and push it to ACR.
Step 4: Create an ACI instance with the below configuration:
CPU cores: 2
Memory: 2 GiB
Port: 5000, TCP
Once the Container Instance is provisioned, wait at least 2-5 minutes and then verify port 5000 on the instance's public IP or FQDN.
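For reference, steps 3 and 4 map roughly to the following commands; this is only a sketch, with the registry name, resource group and ACR credentials as placeholders:

# Step 3: tag the local image and push it to ACR
az acr login --name <registryName>
docker tag libretranslate/libretranslate <registryName>.azurecr.io/libretranslate:latest
docker push <registryName>.azurecr.io/libretranslate:latest

# Step 4: create the instance with 2 cores, 2 GiB and port 5000 exposed
az container create \
  --resource-group <resourceGroup> \
  --name libretranslate \
  --image <registryName>.azurecr.io/libretranslate:latest \
  --registry-username <acrUsername> --registry-password <acrPassword> \
  --cpu 2 --memory 2 \
  --ports 5000 --ip-address Public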
I am building a service which creates on-demand Node-RED instances on Kubernetes. This service needs to have custom authentication and some other service-specific data in a JSON file.
Every instance of Node-RED will have a Persistent Volume associated with it, so one way I thought of doing this was to attach the PVC to a pod, copy the files into the PV, and then start the Node-RED deployment over the modified PVC.
I use the following script to accomplish this:
# Required imports (module level): from os import path; import tarfile, tempfile
# and: from kubernetes.stream import stream
def paste_file_into_pod(self, src_path, dest_path):
    dir_name = path.dirname(src_path)
    bname = path.basename(src_path)
    # Tar the file inside the pod's exec session and stream it back over stdout
    exec_command = ['/bin/sh', '-c', 'cd {src}; tar cf - {base}'.format(src=dir_name, base=bname)]
    with tempfile.TemporaryFile() as tar_buffer:
        resp = stream(self.k8_client.connect_get_namespaced_pod_exec,
                      self.kube_methods.component_name, self.kube_methods.namespace,
                      command=exec_command,
                      stderr=True, stdin=True,
                      stdout=True, tty=False,
                      _preload_content=False)
        print(resp)
        while resp.is_open():
            resp.update(timeout=1)
            if resp.peek_stdout():
                out = resp.read_stdout()
                tar_buffer.write(out.encode('utf-8'))
            if resp.peek_stderr():
                print('STDERR: {0}'.format(resp.read_stderr()))
        resp.close()
        tar_buffer.flush()
        tar_buffer.seek(0)
        # Unpack the received tar stream at the destination path
        with tarfile.open(fileobj=tar_buffer, mode='r:') as tar:
            subdir_and_files = [tarinfo for tarinfo in tar.getmembers()]
            tar.extractall(path=dest_path, members=subdir_and_files)
This seems like a very messy way to do this. Can someone suggest a quick and easy way to start node red in Kubernetes with custom settings.js and some additional files for config?
The better approach is not to use a PV for flow storage, but to use a storage plugin to save flows in a central database. Several already exist, using databases like MongoDB.
You can extend the existing Node-RED container to include a modified settings.js in /data that includes the details for the storage and authentication plugins and uses environment variables to set the instance-specific values at start up.
Examples here: https://www.hardill.me.uk/wordpress/tag/multi-tenant/
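As an aside, if you do keep the copy-files-into-a-pod step from the question, kubectl cp wraps the same tar-over-exec transfer as the Python snippet in a single command; a sketch with placeholder names:

kubectl cp ./settings.js <namespace>/<pod-name>:/data/settings.js
kubectl cp ./auth-config.json <namespace>/<pod-name>:/data/auth-config.json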
I am working on a data-processing microservice. I have this microservice dockerized and now I want to deploy it. To do so, I am trying to manage containers in Azure Container Instances using an Azure Function written in Node.js.
The first thing I wanted to test is spawning containers within a group. My idea was:
const oldConfig = await client.containerGroups.get(
  'resourceGroup',
  'resourceName'
);
const response = await client.containerGroups.createOrUpdate(
  'resourceGroup',
  'resourceName',
  {
    osType: oldConfig.osType,
    containers: [
      ...oldConfig.containers,
      {
        name: 'test',
        image: 'hello-world',
        resources: {
          requests: {
            memoryInGB: 1,
            cpu: 1,
          },
        },
      },
    ],
  }
);
I've added osType because the docs and the interface say it's required, but when I do this I receive the error "to update osType you need to remove and create group containers". When I remove osType, the request is successful, but the ACI group does not change. I cannot recreate the whole group for every new container, because I want the containers to process jobs and terminate by themselves.
Not all properties support updates. See the details below:
Not all container group properties can be updated. For example, to change the restart policy of a container, you must first delete the container group, then create it again.
Changes to these properties require container group deletion prior to redeployment:
OS type
CPU, memory, or GPU resources
Restart policy
Network profile
So the container group will not change after you update the osType. You need to delete the container group and create it again with the changes. Get more details in the Update documentation.
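To illustrate the delete-and-recreate flow, here is a rough Azure CLI sketch; the resource names mirror the question, and a real multi-container group would normally be redeployed from a YAML or ARM template rather than the simplified single-container form shown here:

az container delete --resource-group resourceGroup --name resourceName --yes
az container create --resource-group resourceGroup --name resourceName \
  --image hello-world --os-type Linux \
  --cpu 1 --memory 1 \
  --restart-policy Never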
I am using Serilog to write to a file and am trying to get more information about an error that is occurring in my production cluster.
In my local dev cluster the log files are created fine, but they are not created on the VMs of my production cluster. I think this may be security related.
Has anyone ever had this?
My production cluster has 5 nodes, with a Windows 2016 VM on each.
Even stranger is that this works on a single-node cluster in Azure.
public static ILogger ConfigureLogging(string appName, string appVersion)
{
    AppDomain.CurrentDomain.ProcessExit += (sender, args) => Log.CloseAndFlush();
    var configPackage = FabricRuntime.GetActivationContext().GetConfigurationPackageObject("Config");
    var environmentName = configPackage.GetSetting("appSettings", "Inspired.TradingPlatform:EnvironmentName");
    var loggerConfiguration = new LoggerConfiguration()
        .WriteTo.File(@"D:\SvcFab\applog-" + appName + ".txt", shared: true, rollingInterval: RollingInterval.Day)
        .Enrich.WithProperty("AppName", appName)
        .Enrich.WithProperty("AppVersion", appVersion)
        .Enrich.WithProperty("EnvName", environmentName);
    var log = loggerConfiguration.CreateLogger();
    log.Information("Starting {AppName} v{AppVersion} application", appName, appVersion);
    return Log.Logger = log;
}
I wouldn't recommend logging to local files in Service Fabric, since your node may be moved to another VM at any time and you won't have access to those files. Consider using other sinks that write to an external system (a database, a message bus, or a logging system like Loggly).
It is likely a permission issue. Your service might be trying to log to a folder where it does not have permission.
By default, your services run under the same user as the Fabric.exe process, which runs as NetworkService; you can find more information about this at this link.
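If you just need to unblock the file while debugging, one option is to grant NetworkService modify rights on the folder the sink writes to (D:\SvcFab per the code above, though a dedicated log folder would be safer); run from an elevated prompt on each node:

icacls "D:\SvcFab" /grant "NETWORK SERVICE:(OI)(CI)M"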
I would not recommend this approach, for many reasons; a few of them are:
Your services might be moved around the cluster, so your files will be incomplete
You have to log on to multiple machines to find the logs
The node might be gone along with the files (scale up/down, failure, disk error)
Multiple instances on the same node may try to access the same file
and so on...
On Service Fabric, the recommended way is to use EventSource (or ETW) + EventFlow + Application Insights. They run smoothly together and bring you many features.
If you want to stay on Serilog, I would recommend you use Serilog + Application Insights instead; it will give you more flexibility in your monitoring. Take a look at the Application Insights sink for Serilog here.
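If you go that route, the sink is published on NuGet, so adding it to the service project is a one-liner (configuration then happens through the sink's WriteTo extension as described in its README):

dotnet add package Serilog.Sinks.ApplicationInsights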
This was actually user error! I was connecting to a different cluster of VMs than the one my service fabric was connected to! Whoops!
I have this simple Node socket server as follows:
var ws = require("nodejs-websocket")
var connectionCount = 0;
console.info("Node websocket started @ 8002");
var server = ws.createServer(function (conn) {
    console.log("New connection", ++connectionCount);
    conn.on("close", function (code, reason) {
        console.log("Connection closed")
    });
}).listen(8002);
Now I want to hit this server from several machines. To mimic these machines, I am using Docker: I want to create around 10 different containers, and each container should hit my server using the load-testing tool thor (https://github.com/observing/thor), which can be run as easily as
thor --amount 1000 --messages 100 ws://localhost:8002
How can I implement such docker containers?
PS: I am a novice here.
I believe that it should be possible.
There are Node images of varying sizes available on Docker Hub; choose an appropriate one.
Here are pseudo-instructions to create the image you need:
Get the node image
Install thor from git (for which you already have the details)
Run the container with your thor command (assuming your websocket app is already running)
You can do the above in two ways, either manually or using a Dockerfile.
Since you want to run multiple containers, a Dockerfile would be a good option.
If you can use docker-compose, that would be an even better approach for multiple containers. A rough shell sketch follows.
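This is only a sketch under a few assumptions: it installs thor straight from its git repo inside a stock node image instead of building a dedicated image, and it relies on --network host (Linux) so that ws://localhost:8002 inside the container reaches the server on the host; on Docker Desktop you would target ws://host.docker.internal:8002 instead.

# start 10 throwaway containers, each running thor against the websocket server
for i in $(seq 1 10); do
  docker run -d --name thor-$i --network host node:18 \
    sh -c "npm install -g git+https://github.com/observing/thor.git && thor --amount 1000 --messages 100 ws://localhost:8002"
done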
Hope this is helpful.