Second Bind() call fails with "license expired" for a non-persistent license type in PlayReady DRM

The second Bind() call fails with "license expired" for a non-persistent license in PlayReady. I am working on supporting one service.
The playback sequence for the service is as follows:
Do WI.
Call Bind() -- fails with "license not found".
Do LA (acquire license).
Call Bind() -- succeeds.
Call Commit() -- succeeds.
Call the manifest URL.
The player tries to play the content and finds it is encrypted.
Call Bind() -- fails with "license expired".
My questions:
Why does the second Bind() fail with "license expired"? The license type from the service provider is non-persistent.
Is there any other reason for the license to expire?
On what basis does Microsoft PlayReady report "license expired" for a non-persistent license type?
Please help me with this.

Non-persistent licenses are usable for only one playback, not until the application is restarted. As far as the PlayReady Device Porting Kit is concerned, one playback equals one Drm_Reader_Bind() call. This is why your second Bind() call fails.
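To make that rule concrete, here is a rough TypeScript-flavored sketch; the wrapper names (acquireLicense, bind, commit) are illustrative stand-ins for the Porting Kit's C calls, not the real API:

// Hypothetical wrappers around the Porting Kit; names are illustrative only.
interface DrmSession {
  acquireLicense(keyId: string): Promise<void>; // the LA step
  bind(keyId: string): Promise<void>;           // stands in for Drm_Reader_Bind()
  commit(): Promise<void>;                      // stands in for Drm_Reader_Commit()
}

// With a non-persistent license, every playback needs its own full cycle:
async function playOnce(drm: DrmSession, keyId: string): Promise<void> {
  await drm.acquireLicense(keyId); // re-acquire: the previous license is spent
  await drm.bind(keyId);           // consumes the non-persistent license
  await drm.commit();
  // ...decrypt and play...
}

// Calling bind() a second time after a single acquireLicense() reproduces the
// "license expired" failure from the question.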
While the information about license persistence is public, any more in-depth information is NDA-protected and I cannot discuss it on a public website. If you need further help and can prove that you work for a PlayReady licensee, feel free to contact me for a one-on-one chat via saares#axinom.com.

Related

Azure AD device flow verification_url

Consider this Azure AD OAuth 2.0 device flow grant request:
POST https://login.microsoftonline.com/common/oauth2/devicecode
Content-Type: application/x-www-form-urlencoded
client_id=12345678-1234-1234-1234-123456789012
&grant_type=device_code
&resource=https://graph.microsoft.com
(URL encoding omitted for readability)
According to this draft, the response should include a verification_uri parameter:
verification_uri
REQUIRED. The end-user verification URI on the authorization server. The URI should be short and easy to remember as end-users will be asked to manually type it into their user-agent.
{
"device_code": "GMMhmHCXhWEzkobqIHGG_EnNYYsAkukHspeYUk9E8",
"user_code": "WDJB-MJHT",
"verification_uri": "https://www.example.com/device",
...
However, the response from Azure AD contains verification_url instead (note url instead of uri):
"verification_url": "https://aka.ms/devicelogin"
Is this just a typo in Azure AD's device flow implementation?
Should I treat both variants as valid? Is this being renamed to verification_url in the next draft?
One additional question: can I request a device flow grant from an Azure AD v2 endpoint?
The token endpoint seems to exist as /common/oauth2/v2.0/token, but its device code counterpart, /common/oauth2/v2.0/devicecode, returns 404.
There is a /common/oauth2/devicecode, but I'm unable to use the code it returns against the v2.0 token endpoint (it immediately returns AADSTS70019 Verification code expired.).
It's probably not a typo. The IETF draft (the one you referred to) is backed by both Google and Microsoft, yet both companies' implementations ignore exactly this detail: "verification_uri" vs. "verification_url".
Google came first: they implemented the device flow years ago. I'm not sure of the exact date of first publication, but it was already available in 2012, and they used "verification_url" from the start. The IETF draft's first version dates back to 2015, and for some reason the Google team responsible for the draft decided to use "verification_uri", despite the fact that their own implementation had already been using "verification_url" for years. They have since changed neither the draft nor their implementation, and their documentation uses "verification_url" as well.
https://developers.google.com/identity/protocols/OAuth2ForDevices
https://developers.google.com/identity/sign-in/devices
Facebook on the other hand uses the draft's version for the field name, i.e. "verification_uri". Check out their documentation (and the implementation is aligned with the doc): https://developers.facebook.com/docs/facebook-login/for-devices
I have yet to find official documentation for Microsoft's (i.e. Azure's) device flow implementation, but here are the few posts/articles that cover this subject on a *.microsoft.com domain:
https://blogs.msdn.microsoft.com/azuredev/2018/02/13/assisted-login-using-the-oauth-deviceprofile-flow/
https://azure.microsoft.com/en-us/resources/samples/active-directory-dotnet-deviceprofile/
The latter is accompanied by a GitHub repo: https://github.com/Azure-Samples/active-directory-dotnet-deviceprofile
And here are a few non-Microsoft sources:
https://www.jkawamoto.info/blogs/device-authorization-for-azure/
https://tsmatz.wordpress.com/2016/03/12/azure-ad-device-profile-oauth-flow/
The latter (it's in Japanese) is actually the first detailed example of Azure's device flow implementation that I could find. :-) And it shows "verification_url" as well.
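Until the naming settles, the safest client-side approach is to accept both spellings. A minimal TypeScript sketch, mirroring the request from the question (error handling omitted; the field fallback on the last line is the point being demonstrated):

async function requestDeviceCode(): Promise<string> {
  const resp = await fetch("https://login.microsoftonline.com/common/oauth2/devicecode", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      client_id: "12345678-1234-1234-1234-123456789012",
      grant_type: "device_code",
      resource: "https://graph.microsoft.com",
    }),
  });
  const data = await resp.json();
  // Accept both the draft's "verification_uri" and the "verification_url"
  // variant used by Azure AD and Google.
  return data.verification_uri ?? data.verification_url;
}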
As for your "additional question" ("can i request device flow grant from an Azure AD v2 endpoint?"), I've no idea. Microsoft's device flow implementation is not even official(ly supported yet, at least the lack of documentation suggests this), so it's subject to change.
The v2.0 protocol pages do not mention the "devicecode" endpoint either.
See:
https://learn.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-protocols-oauth-code
https://learn.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-limitations
https://learn.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-compare
So for now I suggest not building anything production-grade on Azure's device flow.

User-specific versions of extensions from the Chrome Web Store

I've been developing and maintaining a Chrome extension for my company where each customer is assigned a unique ID in the code. We've been using the ID to determine license status and to log in to our services (it's a paid extension with a monthly subscription fee).
So far we've hosted the extension files ourselves and had unique update URLs for each customer extension. This has been nice and simple: go to our website, click install, and you're done. With the latest Chrome release, however, that installation procedure has been thwarted by Google, since they now require users to install extensions by dragging and dropping the CRX files into the chrome://chrome/extensions/ tab - unless, of course, your extension is available through the Chrome Web Store. Which leads me to the problem:
We don't want the drag-and-drop CRX installation - which requires the Web Store.
We don't want multiple versions of the extension (one per customer) on the Web Store, since that's maintenance hell every time we update the extension.
We don't want to use Web Store licensing because:
It requires OpenID login.
We sell the extension to schools with many students, where the school pays the bill - not the student.
We don't want to lock our payment method to one browser, i.e. we want to be able to maintain licensing and payment through our own servers.
We don't want users to input a license key, since that's too much of a risk with several thousand students having to enter one - it also requires some kind of storage (cookies/localStorage) that would eventually get cleared, requiring the license key to be entered again.
I'm not 100% certain that my statements are completely correct, so feel free to enlighten me if I missed something.
If they are, the question is: can we somehow tailor the extension for each customer through the Web Store (using the unique ID) without needing to publish one extension per ID?
As a side note, any answer that solves the problem by another method will also be accepted.
For the answer below, I assume your app is a packaged app, not a hosted app.
I have a solution that's fairly similar to your current implementation, but adds one extra step for users. For the student user, the process will work like this:
Download the app from the Web Store. The app does not function yet, and launching it just displays a "Please click the activation link provided by your school/institution" message.
Click a link hosted on your server (i.e., the server where you used to host the update URL) that looks like https://myserver.com/activateapp.php?custid=123456789. You host one such link for each institution you support, and it is the institution's job to provide its link to its students. This link activates the app.
From an implementation point of view, here's how it works:
Host a page, https://myserver.com/activateapp.php, on your server. Server-side, check that the custid parameter is valid. If it is not, send a 404 error.
Your app has a content script, injected into https://myserver.com/activateapp.php, that scans the URL and picks out the customer ID (see the sketch after these steps). Once the app finds the ID, it stores it in localStorage. Since invalid customer IDs produce a 404 error, you know that when the content script runs, the page is not a 404 error; therefore, it is reading a valid customer ID.
Any time the app wants to query your services, it checks if it has a customer ID in localStorage. If it does, it uses that ID; if it does not, it displays a message that the app has not been activated yet. Packaged apps will never have their localStorage erased unless your app is programmed to wipe its own storage, or the user does it from the console. Storage erasure will never "accidentally" happen. Even the strongest browser-wide data/cache purge will only clear localStorage from Web pages, not from apps and extensions.
For extra security -- if you don't want people randomly guessing customer IDs -- you can add an extra signature parameter, like https://myserver.com/activateapp.php?custid=123456789&sig=2464509243. This extra parameter is some server-verified transformation of the customer ID (ideally a cryptographic signature or a purely random value associated with the ID in a database) that is impossible for anyone to guess. When the request for activateapp.php hits the server, it checks for a valid customer ID and a valid corresponding signature. Of course, this doesn't stop people who have legitimate access to a valid link from sharing the link to unauthorized people, but I expect that was a vulnerability that existed in your old system anyway.
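For reference, here is a minimal TypeScript sketch of the content-script side (manifest wiring omitted; note that a content script's localStorage belongs to the page's origin, so this sketch relays the ID to the extension's background page rather than writing it directly):

// content-script.ts - injected into https://myserver.com/activateapp.php.
// If this runs at all, the server returned the page rather than a 404, so
// the custid in the URL has already been validated server-side.
const params = new URLSearchParams(window.location.search);
const custId = params.get("custid");
if (custId !== null) {
  chrome.runtime.sendMessage({ type: "activate", custId });
}

// background.ts - persist the ID in the extension's own localStorage,
// which ordinary browsing-data purges do not touch.
chrome.runtime.onMessage.addListener((msg) => {
  if (msg.type === "activate") {
    localStorage.setItem("custid", msg.custId);
  }
});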

J2ME push registry startup permission

I made an application that uses the Push Registry. When I try the application, it does not work properly because of the permissions. I then found out that if I sign the application, I can reach the "always allow" option.
But when I try the application after signing, on a Samsung Omnia 2 i8910, and click "always allow", an alert comes up saying it will be changed to "only for this session". Does anybody know why this is, or how I can solve it?
Note: I use Java Verified R&D signing, and when I try to load the application onto my Nokia 5800 it doesn't load because of a certificate error. I don't know what I'm doing wrong. I can load it onto the Samsung.
After some research, I found an article about this problem. It occurs because of J2ME security policies: J2ME does not allow "always allow" to be set for both the auto-start permission and network access, and that cannot be changed by signing the application. The article says the permissions are mutually exclusive:
"Additionally, the Blanket setting for Application Auto Invocation and the Blanket
setting for Net Access are mutually exclusive. This constraint is to prevent a MIDlet
suite from auto-invoking itself, then accessing a chargeable network without the user
being aware. If the user attempts to set either the Application Auto Invocation or the
Network Function group to "Blanket" when the other Function group is already in
"Blanket" mode, the user MUST be prompted as to which of the two Function groups
shall be granted "Blanket" and which Function group shall be granted "Session"."
ref: http://jcp.org/aboutJava/communityprocess/maintenance/jsr118/MIDP_2.0.1_MR_addendum.pdf
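In JAD terms, the collision arises as soon as a suite declares both a push connection and network permissions. An illustrative fragment (class name and port are made up):

MIDlet-Push-1: socket://:5000, com.example.PushMIDlet, *
MIDlet-Permissions: javax.microedition.io.PushRegistry, javax.microedition.io.Connector.socket, javax.microedition.io.Connector.http

On a suite like this, the user can grant "Blanket" either to auto-invocation or to network access, but never to both at once.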
Regarding the note "I use Java Verified R&D signing, and when I try to load the application onto my Nokia 5800 it doesn't load because of a certificate error":
An R&D-signed MIDlet has to be run on a device with its date rolled back, because an R&D certificate is issued for the 7 days preceding the date of signing. For example: if the date of signing is 19.08.2012, the validity period is 12.08.2012 to 18.08.2012.

J2ME: Set security permission programmatically

I have created a J2ME app and added it as a jar in another app. The original app runs with maximum permissions and works fine, but when I add it as a jar in the second app, I get a security exception while making a web service call, and I noticed the app is running with minimum security.
I have added the MIDlet permissions for http and https in the JAD as well:
MIDlet-Permissions: javax.microedition.io.Connector.http, javax.microedition.io.Connector.https
Any idea how to fix this? The error I get is below:
java.lang.SecurityException: Application not authorized to access the restricted API
at com.sun.midp.security.SecurityToken.checkForPermission(+459)
at com.sun.midp.security.SecurityToken.checkForPermission(+15)
at com.sun.midp.midletsuite.MIDletSuiteImpl.checkForPermission(+20)
at com.sun.midp.dev.DevMIDletSuiteImpl.checkForPermission(+28)
at com.sun.midp.dev.DevMIDletSuiteImpl.checkForPermission(+7)
at com.sun.midp.io.ConnectionBaseAdapter.checkForPermission(+67)
at com.sun.midp.io.j2me.http.Protocol.checkForPermission(+17)
at com.sun.midp.io.ConnectionBaseAdapter.openPrim(+6)
at javax.microedition.io.Connector.openPrim(+299)
at javax.microedition.io.Connector.open(+15)
at org.ksoap2.transport.ServiceConnectionMidp.<init>(+11)
at org.ksoap2.transport.HttpTransport.getServiceConnection(+11)
at org.ksoap2.transport.HttpTransport.call(+51)
at com.vxceed.xnappexpresssync.comm.WebserviceCall.call(+28)
at com.vxceed.xnappexpresssync.comm.WebserviceCall.callServiceMethod(+112)
at com.vxceed.xnappexpresssync.utility.Generic.sendRequest(+22)
at com.vxceed.xnappexpresssync.main.Authentication.authenticateUser(+77)
at app.ui.ServerSync.sendServerRequest(+127)
at app.ui.LoginScreen.authenticateUser(+9)
at app.ui.LoginScreen.isLoginValidate(+76)
at app.ui.LoginScreen.keyPressed(+48)
at app.ui.MainAppScreen$Clean.run(+33)
at java.util.TimerThread.mainLoop(+237)
at java.util.TimerThread.run(+4)
As Jonathan Knudsen states in "Understanding MIDP 2.0's Security Architecture":
The MIDP 2.0 specification defines an open-ended system of permissions. To make any type of network connection, a MIDlet must have an appropriate permission. For example, a MIDlet that uses HTTP to talk to a server must have permission to open an HTTP connection. The permissions defined in MIDP 2.0 correspond to network protocols, but the architecture allows optional APIs to define their own permissions.
Each permission has a unique name; the MIDP 2.0 permissions are:
javax.microedition.io.Connector.http
javax.microedition.io.Connector.socket
javax.microedition.io.Connector.https
javax.microedition.io.Connector.ssl
javax.microedition.io.Connector.datagram
javax.microedition.io.Connector.serversocket
javax.microedition.io.Connector.datagramreceiver
javax.microedition.io.Connector.comm
javax.microedition.io.PushRegistry
If you are using the above APIs, your .jar file must be signed with a proper signing certificate.
Check the article mentioned above for a more detailed overview of permissions.
You can buy such a certificate from, for example, Verisign.
Posting the solution in case it helps someone.
The problem was with the emulator. When I used the J2ME SDK 3.0 emulator with DefaultCldcPhone1, it worked fine.

Restrict browser plugin usage to specific servers?

For a new banking application we are currently discussing the details of a browser plugin installed on client PCs for accessing smartcard readers.
A question that came up was: is there a way to restrict the usage of this plugin to a specified list of domains? It should prevent any third-party site from using the plugin just by serving some <embed>/<object> tag.
The solution should be basically browser-independent.
It may involve cryptography if necessary, but should only add moderate implementation overhead to the plugin code.
Ideas, anyone?
I know there is an MS solution called SiteLock, but that's IE-only.
You could hard-code the list of authorized domains into the plugin itself.
Alternatively, you could expose a web service that delivers the list of authorized domains. The plugin could call this web service when instantiated to determine whether it may start.
We came up with this idea (described for one server):
The plugin carries a public key A. The plugin creator issues a certificate for the server's public key B. The server starts the plugin within an HTML page and provides these parameters:
several application-specific parameters
the certificate
a digital signature
The plugin then starts and first of all performs these checks:
verify the certificate with the public key A delivered within the plugin
verify the signature with the public key B from the certificate
If verification succeeds, proceed; otherwise terminate. A sketch of these checks follows below.
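Here is a minimal sketch of the two checks in TypeScript using Node's crypto module (a real smartcard plugin would be native code; key formats, certificate parsing, and parameter canonicalization are all simplified, and PLUGIN_ROOT_KEY_PEM stands for the embedded public key A):

import { createPublicKey, createVerify, X509Certificate } from "crypto";

// Public key A, compiled into the plugin (placeholder PEM).
const PLUGIN_ROOT_KEY_PEM = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----";

function verifyStartup(params: string, certPem: string, signatureB64: string): boolean {
  // Check 1: the certificate must verify against the embedded public key A.
  const cert = new X509Certificate(certPem);
  if (!cert.verify(createPublicKey(PLUGIN_ROOT_KEY_PEM))) {
    return false;
  }
  // Check 2: the signature over the application-specific parameters must
  // verify against public key B, taken from the certificate.
  const verifier = createVerify("SHA256");
  verifier.update(params);
  return verifier.verify(cert.publicKey, signatureB64, "base64");
}

// The plugin proceeds only if verifyStartup(...) returns true.

Note that to actually lock the plugin to the server, the signed parameters need to include something a third-party page cannot replay verbatim (for example a timestamp or nonce); otherwise the certificate and signature could simply be copied into another site's <embed> tag.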
