SQLITE_READONLY error with Electron build on macOS - node.js

I have created a build using Electron and npm. The application uses SQLite as its database. The application runs fine before creating the build (npm run build), but after creating the build the database becomes read-only. I have checked the permissions with the command "ls -asl" and it shows both read and write permission on the database file. But when I try to insert or update any records, it throws the error "Error: SQLITE_READONLY: Attempt to write a readonly database". I don't know why this is happening. Please provide some help here.

Don't put the database file inside the application installation directory; put it in the directory returned by app.getPath('userData') instead.

The folder the database resides in must have write permissions, as well as the actual database file itself.
In my case, I had my SQLite file inside a database folder, as below:
database (initial permission 665)
- app.db (initial permission 665)
I changed the permissions as follows:
database (permission 667)
- app.db (permission 666)

Related

nodejs .node-xmlhttprequest-sync-1 file created

I am using a Google Cloud Function with the nodejs12 runtime and I am getting the following error:
EROFS: read-only file system, open '.node-xmlhttprequest-sync-1'
Node.js (Express.js) is creating a file at the same level as index.js, which is not permitted (files should be created in /tmp/ in Cloud Functions).
Why is this file created?
If it is necessary, how do I ensure it is created in /tmp?
This file is created by the node-XMLHttpRequest library when the settings.async flag is set to false: https://github.com/driverdan/node-XMLHttpRequest/blob/master/lib/XMLHttpRequest.js#L480
Are you or any functions you're importing using this library? If so, the quickest fix would be to use async requests.
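As a rough illustration (assertAsyncXhr is a hypothetical helper, not part of the library), the decisive detail is the async flag passed to open(): only the synchronous mode takes the code path that writes the temp file into the working directory, which is read-only on Cloud Functions:

```javascript
// Hypothetical guard: reject the synchronous mode that makes
// node-XMLHttpRequest write .node-xmlhttprequest-sync-* into the current
// working directory.
function assertAsyncXhr(asyncFlag) {
  if (asyncFlag === false) {
    throw new Error('Use async requests: sync XHR writes a temp file to cwd');
  }
  return true;
}

// xhr.open('GET', url, true)  -> fine, no temp file is created
// xhr.open('GET', url, false) -> triggers the temp-file code path
console.log(assertAsyncXhr(true)); // true
```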

Possible to create file in sources directory on Azure DevOps during build

I have a Node script which needs to create a file in the root directory of my application before the build runs.
The data this file contains is specific to each triggered build; however, I'm having no luck on Azure DevOps in this regard.
To write the file I'm using fs.writeFile(...), something similar to this:
fs.writeFile(targetPath, file, function(err) { // removed for brevity });
However, this throws an exception:
[Error: ENOENT: no such file or directory, open '/home/vsts/work/1/s/data-file.json']
Locally this works. I assumed this had to do with permissions, so I tried adding a blank version of the file to my project, but it still throws this exception.
Possible to create file in sources directory on Azure DevOps during build
The answer is yes. This is a fully supported scenario in Azure DevOps Services if you're using a Microsoft-hosted Ubuntu agent.
If you hit this issue when using a Microsoft-hosted agent, it is more likely a path issue. Please check:
The function where the "no such file or directory" error comes from. Apart from the fs.writeFile function, do you also use fs.readFile in the same .js file? If so, make sure the two paths are the same.
The structure of your source files and your actual requirement. According to your question you want to create the file in the source directory /home/vsts/work/1/s, but the first line indicates that you actually want to create the file in the root directory of your application.
1) If you want to create the file in the source directory /home/vsts/work/1/s:
In your .js file, use a relative targetPath like './data-file.json', and make sure you run the command node xx.js from the source directory (leave the CMD/PowerShell/Bash task's working directory blank!).
2) If you want to do that in the root of the application folder, e.g. /home/vsts/work/1/s/MyApp:
In your .js file, use __dirname, e.g. fs.writeFile(__dirname + '/data-file.json', file, function(err) { // removed for brevity }); and fs.readFile(__dirname + '/data-file.json', ...).

Errors deploying Node.js app

I am new to IBM Bluemix and all of their products, and I am trying to do this project: http://www.ibm.com/developerworks/library/ba-muse-toycar-app/index.html . I have done all of the modifying of the car; I am just having issues with the code.
First, a specific question on Part 2, Step 2.b: when entering the information for the Cloudant database, what do I put in for the cradle connection, and how do I acquire that information?
Second, when I go to deploy the app (Part 2, Step 2.4), how do I navigate to the application directory? I have looked at the help and googled to no avail. If we fix these things, I am hoping I will be able to deploy the application. Currently, when I try to deploy it, I get this error:
cf push braincar
Updating app braincar in org ccornwe1@students.kennesaw.edu / space dev as myemailaddress@gmail.com...
OK
Uploading braincar...
FAILED
Error uploading application.
open /Users/codycornwell/.rnd: permission denied
I am green to all this, so any help and explanation is greatly appreciated! Thanks!
In the tutorial's part 2, step 2.b, you need to specify your Cloudant credentials. There are several ways to get Cloudant credentials, but I'll focus on doing it within the context of Bluemix and the cf command line tool.
You will first need to create a Cloudant service instance, then create a set of service keys (credentials) and then view them.
Create a Cloudant service instance named myCloudantSvc using the Shared plan:
$> cf create-service cloudantNoSQLDB Shared myCloudantSvc
Create a set of service keys (credentials) named cred1:
$> cf create-service-key myCloudantSvc cred1
View the credentials for the service key you just created
$> cf service-key myCloudantSvc cred1
With the last step above, you should see output which provides you with the username, password and host values that you'll need to place into your app.js code. It should look something like the following:
{
"host": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-bluemix.cloudant.com",
"password": "longSecretPassword",
"port": 443,
"url": "https://xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-bluemix:longSecretPassword#xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-bluemix.cloudant.com",
"username": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-bluemix"
}
For your second question, it looks like you're performing the cf push from your $HOME directory (as mentioned in the comment by @vmovva). By default, the cf push command will send all files in the current directory to Bluemix/CloudFoundry.
Try running the command from the directory where your source code is located to reduce the files pushed to Bluemix. If your source code is intermingled in your $HOME directory, move your source into a different directory and then push from that directory.
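For illustration, a small helper (hypothetical, with placeholder values mirroring the service-key output above) that assembles a Cloudant connection URL from the host, port, username, and password fields:

```javascript
// Hypothetical helper: build a Cloudant URL from the fields returned by
// `cf service-key` (host, port, username, password).
function cloudantUrl({ host, port, username, password }) {
  return 'https://' + username + ':' + password + '@' + host + ':' + port;
}

// Placeholder credentials in the same shape as the service-key output:
const creds = {
  host: 'example-bluemix.cloudant.com',
  port: 443,
  username: 'example-bluemix',
  password: 'longSecretPassword',
};
console.log(cloudantUrl(creds));
// https://example-bluemix:longSecretPassword@example-bluemix.cloudant.com:443
```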

Windows Azure - Migrations is enabled for context 'ApplicationDbContext'

I am following this blog step by step: http://www.windowsazure.com/en-us/documentation/articles/web-sites-dotnet-deploy-aspnet-mvc-app-membership-oauth-sql-database/#setupdevenv
If I run from my local machine, I can see the data coming from the Windows Azure database, and I can add, update, and delete records; it works perfectly. The problem is when I publish my application to Windows Azure: I can see my site and all of the static pages work fine, except the one page that interacts with the database.
here is my web.config connection string:
<add name="DefaultConnection" connectionString="server=tcp:* insert server name*.database.windows.net,1433;Database=* insert database name *;User ID=*insert username *@* insert server name*;Password={*insert password here *};Trusted_Connection=False;Encrypt=True;Connection Timeout=30;" />
I get this message when I try to access the page at http://XXXX.azurewebsites.net/Employee
Error Message:
Migrations is enabled for context 'ApplicationDbContext' but the database does not exist or contains no mapped tables. Use Migrations to create the database and its tables, for example by running the 'Update-Database' command from the Package Manager Console.
It seems that your database cannot be created automatically. The fastest way to fix this is to follow the suggestion from the error message: open the Package Manager Console with the project that contains the connection string and Configuration.cs (your migrations) selected as the startup project, and run Update-Database. You may need to pass some parameters to this command if you have changed something in your migrations.

no such repository on migrating to a new cvs server

I am moving from cvsserv1 to cvsserv2. The current server runs CVS 1.11 on RHEL; cvsserv2 runs Ubuntu 12. This is my procedure to port CVS:
Zip the entire repository on cvsserv1.
Move the zip to cvsserv2.
Extract the zip to /home/users on cvsserv2.
Set up the CVS service on cvsserv2 in pserver mode.
Initialize the repository at /home/users/cvsroot using "cvs -d /home/users/cvsroot init".
Connect to cvsserv2 from Eclipse using anonymous access to do a test checkout.
I am failing on step 6 with the error message "no such repository". What am I doing wrong?
UPDATE
I tried changing the above method by adopting the procedure at http://mazanatti.info/archives/67/ and was partially successful.
At step 3 (as in that link), after initializing the repo on cvsserv2, I copied my repository to /var/lib/cvsd/project1, overwriting the CVSROOT folder. After finishing all the steps, I was able to connect successfully. However, when I try to check out, I don't see any branches, and when I try to Refresh Tags, I receive the following error:
What is going wrong?
OK, I figured this one out. For those who might encounter this issue again, here's how I managed to identify and fix it:
Eclipse's CVS client doesn't give you much information. (I could be wrong; maybe it writes some debug info to the Eclipse log file. Still, I think the error message should have been more descriptive.) Anyway, I obtained TortoiseCVS and attempted a checkout, and it failed with an error message along the lines of "failed to obtain dir lock in repository '/home/cvsroot/foo'". That is not the exact message, but it was something like that.
So all I had to do was go into my CVS dump from cvsserv1 and look for references to that directory (which is a valid path on cvsserv1 but not on cvsserv2). I found a reference to it in the config file under the CVSROOT folder, assigned to a property called LockDir, which pointed at /home/cvsroot/foo on the older server as the lock directory. All I had to do was comment out this property and restart cvsd. Everything started working fine after this!
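For reference, the fix amounts to a one-line change in the CVSROOT/config file on the new server (the path is the one from the error above):

```
# CVSROOT/config on cvsserv2: this LockDir pointed at a path that only
# existed on cvsserv1; commenting it out makes CVS fall back to creating
# lock files inside the repository directories themselves.
#LockDir=/home/cvsroot/foo
```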
