I have two environments: one online, the other local.
The online environment is a user trial environment: Windows Server 2016, SQL Server 2016, IIS 10.0, an SSL certificate signed by an authorized organization, Acumatica 18.000.0062.
The local environment is a development environment: Windows 10, SQL Server 2016, IIS 10.0, a self-signed SSL certificate, Acumatica 18.000.0062.
The online environment performs properly and consistently, for both the SOAP service and the REST service (using OAuth 2.0).
The local environment worked correctly at the beginning: the screen-based API worked properly, the WSDL for each screen's web service was generated correctly, and the contract-based API responded correctly in cookie mode. Here is the problem: after I installed a self-signed certificate in order to use OAuth 2.0, both the SOAP and REST APIs started failing with the same error, "Object reference not set to an instance of an object".
I have dug into this myself. The clearest symptom is the Tools → Web Service page that generates the WSDL: before the self-signed certificate was installed, all screens generated correctly; afterwards, every screen fails with the same error:
Source File: ...\Frames\WsdlHelp.aspx.cs Line: 9
I don't want to be misleading, but this is as much detail as I can provide. My point is that the online and local environments are almost exactly the same, and the online one always works properly even with SSL on, so my best guess is that this has something to do with the self-signed certificate. However, even after turning off HTTPS in local IIS, the problem still exists.
Can anyone help me with this headache of an issue?
Unfortunately, this is a known issue with Acumatica and .NET 4.7.2. It was fixed in 2017 R2 Update 8 and 2018 R1 Update 2. I'm afraid you have no choice but to either downgrade .NET to 4.7.1 or upgrade Acumatica to one of those versions.
Related
Error: java.io.IOException: Could not transmit message
Issue details: we are running our application on JBoss AS 5.1 and OpenJDK 7 (version 1.7.0_261); the servers are Red Hat CentOS 5.
We have a legacy application that makes several web service calls to NetSuite. After NetSuite's recent update retiring the old cipher suites, all our calls started failing. The TLSv1.2 protocol is enabled (with -Dhttps.protocols=TLSv1.2 in run.conf), and since this is Java 7 we added the Bouncy Castle security JARs to increase the supported cipher suites (as recommended in this comment by Igor: https://stackoverflow.com/a/49154932/2308058). With this, we were able to get the REST web service calls working, but the SOAP calls now fail with the error org.bouncycastle.tls.TlsFatalAlertReceived: internal_error(80).
Other things we tried, none of which brought us luck yet:
Explicitly adding the cipher suites supported by NetSuite in run.conf with -Dhttps.cipherSuites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
Adding the TLS protocol in run.conf with -Djdk.tls.client.protocols=TLSv1.2
Adding the self-signed certificate to the keystore
Adding NetSuite's certificate to the Java cacerts store
SOAP calls work fine on Java 8, but moving this legacy application to Java 8 and WildFly is a very heavy lift, so we are looking for alternative options.
Any suggestions on getting this resolved would be very helpful, please! TIA!
TlsFatalAlertReceived means a fatal alert was received from the peer (i.e. the failure happened on NetSuite's side). internal_error usually means something went wrong in the implementation itself rather than a configuration mismatch such as cipher suites, although I don't know how careful NetSuite is about its choice of alerts. In any case, rather than guessing at the issue, the real next step is to look at the NetSuite server logs to find out what is failing.
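If you cannot see NetSuite's server logs, the handshake can also be traced from the client side with the standard JSSE debug property. A minimal sketch (the exact placement in run.conf is an assumption about your JBoss setup):

```
# run.conf (sketch): print every TLS handshake message to stdout
JAVA_OPTS="$JAVA_OPTS -Djavax.net.debug=ssl:handshake"
```

The trace shows exactly which protocol and cipher suites the client offers, and at what point in the handshake the internal_error alert arrives.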
I have an ASP.NET Core app running on my local machine. I'm trying to test that app with some Node.js code I wrote, using Axios. The following code fails with the error "Unable to verify the first certificate":
let result = await axios.get('https://localhost:5001/');
I have seen several solutions posted on SO, but none of them explain the issue. I don't understand 1) what's safe (we're dealing with certificates here) and 2) where the change needs to be made (in the Node app, in the ASP.NET Core app, or on both sides).
How do I safely allow Node.js to access the ASP.NET Core app running locally on my machine?
Thank you!
After installation, ASP.NET Core uses a self-signed development certificate. Since it is self-signed, there is no reason for it to be trusted, so you need to either trust it manually or suppress this error (which means accepting untrusted certificates when making requests).
In a production environment you would use your own certificate, which might come from a trusted provider like Let's Encrypt, so you don't need to worry about this issue there.
Questions:
what's safe (we're dealing with certificates here)
"Safe" in HTTPS means that the client always verifies that the server-side certificate is trusted.
where the change needs to be made (i.e. in the Node app or in the ASP.NET Core app, or even changes on both sides).
No, nothing needs to change in either app; just trusting the self-signed certificate is enough.
To trust the certificate, run dotnet dev-certs https --trust (Windows and macOS only).
Learn about HTTPS: https://aka.ms/dotnet-https
I have an Azure App Service whose Stack setting (under General settings) is ASP.NET v4.7; however, the target runtime version in its web.config is 4.5.1.
I have another Azure App Service which hosts a Linux container.
The first one is set to use TLS 1.2.
I am initiating requests from the .NET app (the first) to the Linux container (the second).
When the second one, the Linux container, is set to use TLS 1.2, the requests fail; when it is set to TLS 1.0, they succeed.
Has anyone experienced this issue?
From the post it is tricky to narrow down your issue, but here are a couple of things you can try.
Ensure the TLS version is not hard-coded anywhere. Conversely, as a test, you can hard-code it with ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12; and run again. If that works, you have clarity that there is no other issue and you simply need to switch to the right .NET version.
Target .NET Framework version 4.7 or above. You say your target runtime version is 4.5.1, which will not send requests with the latest TLS version by default.
<system.web><httpRuntime targetFramework> in web.config should show the intended .NET version.
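For reference, a minimal sketch of that web.config fragment (the version number 4.7.2 is illustrative; targeting 4.7 or later lets outgoing requests pick up TLS 1.2 by default):

```xml
<configuration>
  <system.web>
    <!-- 4.7+ enables the modern TLS defaults for outgoing requests -->
    <compilation targetFramework="4.7.2" />
    <httpRuntime targetFramework="4.7.2" />
  </system.web>
</configuration>
```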
Last but not least, it is always a good idea to read the documentation patiently; you might figure out the issue yourself.
I have set up a plain SignalR solution using .NET Core 2.2. I cannot use Core 3.x because I need some libraries that depend on 2.2.
It works fine when I debug locally.
However, when I deploy to Azure as an App Service, whenever the JS client tries to establish a connection, I can see in Chrome's network tab that the call to /hub/negotiate fails and returns a 400.
This does not happen locally. Locally I'm running IIS Express, and the server in Azure is running Kestrel.
What I have tried thus far:
Made sure web sockets is enabled under configuration in Azure. It was already enabled.
I believe Kestrel is behind a proxy in Azure, so I added app.UseForwardedHeaders(new ForwardedHeadersOptions { ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto }); at the very top. This had no effect.
I'm not using MessagePack.
Using:
netcoreapp2.2
@aspnet/signalr 1.1.4 (is this correct for 2.2? It works locally)
What could be wrong?
I can tell you clearly that there is no problem using SignalR 1.1.4 with .NET Core 2.2.
It runs normally when deployed on both Windows and Linux. You can download my test code from GitHub and compare it with your project: is there any modification that caused this error?
The proxy (forwarded headers) settings are not used.
The only thing to note is:
I found that when the project is deployed on Linux, it takes a while, about three to five minutes, before it starts operating normally. The reason is currently unclear. If you encounter this, you can raise a support ticket on the portal.
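One client-side workaround sometimes suggested for negotiate failures behind a reverse proxy (this is not from the answer above, and the hub URL below is hypothetical) is to skip the negotiate request entirely and force the WebSockets transport, which the @aspnet/signalr client supports:

```javascript
// Sketch, assuming @aspnet/signalr 1.1.4 is loaded as `signalR`.
// skipNegotiation is only valid when the transport is WebSockets.
const httpOptions = {
  skipNegotiation: true, // never POST to /hub/negotiate
  transport: 1,          // signalR.HttpTransportType.WebSockets === 1
};
// const connection = new signalR.HubConnectionBuilder()
//   .withUrl("https://myapp.azurewebsites.net/hub", httpOptions)
//   .build();
// await connection.start();
```

This only helps if the proxy passes WebSocket upgrades through, which is why WebSockets must stay enabled in the App Service configuration.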
I'd like to incorporate security features into my standalone XULRunner app. Specifically, I'd like to use security certificates to validate the app executable as downloaded by a user; from what I've seen, this is called code signing. But I'm very green in this area. Any pointers on how to proceed? Thanks in advance.
The certificate functionality built into XULRunner isn't meant to validate signatures of Windows executables; you would need to use Windows functions for that (e.g. via js-ctypes). It won't be simple, however; here you can see how that check works in C++ code.
However, if you are merely downloading an update to your application, then an HTTPS connection may be sufficient: the origin of the executable is verified then (though this won't help you if that server is hacked).