I am working on a web app for an industry with strict compliance requirements. The part I am having trouble with is the sign-off step: when signing something off, the user needs to re-enter their credentials, and the input must come from the user themselves, not from a password manager.
I have tried autocomplete="off" and several variations of it, as well as changing the type of the password field, but the browser still shows its autofill dialog.
Has anyone managed to overcome this problem at an enterprise scale?
Posting here for others: autocomplete="current-password" seems to suppress the dialog, but only in Chrome.
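For reference, a minimal form sketch of the combination described above; the field names and ids are illustrative placeholders, not part of any standard:

```html
<!-- Illustrative sign-off form; "signoff-user"/"signoff-pass" are placeholders.
     autocomplete="off" on the form plus autocomplete="current-password" on the
     password field is the combination reported above to suppress Chrome's
     autofill dialog. Behavior varies across browsers and Chrome versions. -->
<form autocomplete="off">
  <input type="text" name="signoff-user" autocomplete="off" />
  <input type="password" name="signoff-pass" autocomplete="current-password" />
  <button type="submit">Sign off</button>
</form>
```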
I am adding a webchat (connected to Bot Framework) as an extension in Google Chrome. I need to capture the Windows user's details via JavaScript.
Nope, there's no API that can do that. You're welcome to look through the list of extension APIs yourself.
There are two ideas that come close:
The chrome.identity API can provide details of the Google account if the user is signed into Chrome - useful if you just need some sort of identifier. You'd need the "identity" and "identity.email" permissions and would call chrome.identity.getProfileUserInfo. Interestingly, that produces no permission warnings.
Native Messaging allows you to create a Windows binary that your extension can talk to. Obviously, a native application can do whatever it wants, including getting the signed-in user's details, but it requires users to install another component from outside the Web Store. In short: it's a big hammer that's likely not worth it for just this.
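A minimal sketch of the chrome.identity route, assuming the "identity" and "identity.email" permissions are declared in the manifest. The API object is passed in as a parameter so the callback shape is visible; getProfileEmail is an illustrative helper, not part of any API:

```javascript
// Sketch: resolve the signed-in Chrome profile's email address.
// Assumes "identity" and "identity.email" permissions in manifest.json.
function getProfileEmail(identityApi) {
  return new Promise((resolve) => {
    identityApi.getProfileUserInfo((info) => {
      // info.email is an empty string when the user is not signed in
      // (or hasn't granted the email scope); normalize that to null.
      resolve(info.email || null);
    });
  });
}
```

In an extension you would call getProfileEmail(chrome.identity); note that this identifies the Google account signed into Chrome, not the Windows user.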
I am a developer working in a security-related area. Recently I ran into a small problem: given an app package such as an Android .apk or an iOS .ipa, how can I check whether it performs malicious actions?
My first thought was to check its manifest to see which permissions it has requested. But this general method cannot detect actions such as capturing screenshots or recording the user's tap positions on the screen.
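The manifest-permission check described above can be sketched as a simple filter. The permission strings are real Android permission names, but which ones count as "risky" here is an assumption for illustration, not an official classification:

```javascript
// Illustrative risk list - an assumption for the sketch, not an
// official Android classification.
const RISKY_PERMISSIONS = new Set([
  "android.permission.READ_SMS",
  "android.permission.RECORD_AUDIO",
  "android.permission.SYSTEM_ALERT_WINDOW",
  "android.permission.BIND_ACCESSIBILITY_SERVICE",
]);

// Given the <uses-permission> names extracted from a manifest,
// return the subset considered risky.
function flagRiskyPermissions(requested) {
  return requested.filter((p) => RISKY_PERMISSIONS.has(p));
}
```

As the question notes, this static check says nothing about what the app actually does at runtime; it only surfaces declared capabilities for a human reviewer.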
Then I searched for how the App Store and Google Play check the apps submitted by developers. It turns out they first check the certificate or signature of the app to make sure it was published by a trusted organization, and then statically check the system permissions the app requests.
I guess there must be some in-depth detection methods or theory that Google and Apple use to make sure their apps are safe to download. Can anyone point me to useful information or links I can learn from?
Thank you.
I am trying to understand the Windows Phone 7 application sandbox in detail. So I want to understand things such as:
Does each app run as its own unique user?
Where is the home (installation, data) directory for each app?
What are the file system permissions on the application home (installation, data) directory etc.?
I am trying to learn this by writing and running sample code (which prints out the current user, current directory, etc.) on the emulator. However, the "Security Critical" and "trusted application" mechanisms within Silverlight are turned off for Windows Phone 7 applications.
Following are my questions:
Is there a way to print out the current user name and current directory while running the application within the emulator?
Is there a way to run "security critical" code within an app in the WP7 emulator? Can I somehow configure the emulator settings to allow these "security critical" APIs, or make my application a "trusted application"?
Is there any documentation out there that details this sandbox architecture?
I have tried searching, but I haven't found any in-depth documentation about the WP7 sandbox architecture covering the details above. I want to understand how the WP7 sandbox and security model works and how it is implemented per application.
Thanks,
WinPhone7_Developer
The sandboxing model for applications on the phone means that third-party applications can't run in the background, can only access IsolatedStorage rather than a shared file system, and can't directly interact with user data or phone functionality.
Details of the account the application runs as cannot be accessed. You can't even get details of the owner of the phone. The nearest you can get is an anonymized ID of the phone's user: http://msdn.microsoft.com/en-us/library/microsoft.phone.info.userextendedproperties.getvalue(v=vs.92).aspx
No, you can only use APIs in the public SDK.
There is extensive and very good documentation at http://msdn.microsoft.com/en-us/library/ff402535(v=vs.92).aspx
You may be particularly interested in the following sections:
Execution Model for Windows Phone
Isolated Storage for Windows Phone
Security for Windows Phone
In terms of learning about the platform, I'd start with the many available resources that explain what you can do on the phone, rather than attempting to do things you can do on other platforms (even ones which are also "Windows" platforms).
When I create an Azure ASP.NET application, the default .NET trust level is Full trust. I always change it to Windows Azure partial trust, which is similar to ASP.NET's medium trust level.
You can do this either through the GUI (select Properties on the Role) or by setting enableNativeCodeExecution to false in the service definition file (.csdef), like below:
<WebRole name="ServiceRuntimeWebsite" enableNativeCodeExecution="false">
As a security-conscious developer, I want my application to run in partial trust mode by default, since it provides a higher level of security. If I need something like Reflection or P/Invoke, I want to make the decision to lower that trust level myself.
I'm sure there's a reason why Microsoft decided to use Full trust as a default .NET trust level, I just fail to see it. If you know the reason, or you think you know it, please let me know.
Full trust is required not only for P/Invoke but for .NET reflection as well. As a bottom-line result, nearly all moderately sized apps need full trust, because nearly all widespread libraries need it too (NHibernate, for example). Actually, I have been asking the exact opposite question on the Azure forums.
The issue of full or partial trust pertains to the environment in which your application runs. The more control and/or "ownership" of the environment and assemblies you have, the more acceptable it is to have full-trust settings.
For example, if you create an Azure web site (a July 2012 capability) and, mimicking WordPress or Umbraco, your web site allows arbitrary assembly plugins to be downloaded and installed, then it is important to have a partially trusted environment. One of the downloaded and executed plugins, which you don't control or own, could contain malware. Not only does this impact the security and stability of your web site, but some would argue it impacts other (multi-tenant) hosted web sites which have no relation to yours.
Certainly your web site will rely on 3rd-party libraries, such as log4net or StructureMap, but those are extremely well-known and vetted libraries whose security impact is not in question. Ergo, if you are running an Azure web role (a much less "multi-tenant" affair) and merely running such "trusted" 3rd-party libraries, then there really is no issue with running in full trust.
Yes, unfortunately it is still very hard (if not impossible) to write large .NET apps that run in partial trust.
We need much better technology and tools (like CAS.NET)
Because Medium Trust is now officially obsolete. If you start a new web project in Visual Studio, it already requires Full Trust (and doesn't work under partial trust). Microsoft says: do not depend on Medium Trust; instead, use Full Trust and isolate untrusted applications in separate application pools.
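For context, the now-obsolete Medium Trust mode was the ASP.NET web.config setting shown below; this is a configuration fragment only, to illustrate what is being deprecated:

```xml
<!-- Legacy ASP.NET partial-trust setting. Microsoft's current guidance is
     Full Trust plus isolating untrusted applications in separate
     application pools, rather than relying on this. -->
<configuration>
  <system.web>
    <trust level="Medium" />
  </system.web>
</configuration>
```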
Sources:
Stack Overflow answer: quoted response from the ASP.NET team
Microsoft: ASP.NET Partial Trust does not guarantee application isolation
Microsoft: ASP.NET web development best practices