I can stream the '~/test/test.mp4' file when the secure token setting is 'Do NOT use SecureToken'.
But I can't stream the '~/test/test.mp4' file when the secure token setting is 'Protect all protocols using hash (SecureToken version 2)'.
However, I can stream the '~/test.mp4' file when the secure token setting is 'Protect all protocols using hash (SecureToken version 2)'.
Example:
Do NOT use SecureToken
success
rtmp://example.com:1935/vod/_definst_/mp4:test/test.mp4
success
rtmp://example.com:1935/vod/mp4:test.mp4
Protect all protocols using hash (SecureToken version 2)
fail
rtmp://jungslab.com:1935/vod/_definst_/mp4:test/test.mp4?wowzatokenendtime=1461729940&wowzatokenstarttime=1461719140&wowzatokenhash=nB0hdUG-U60WAQ-wV5lIRD0e3tbCCXk3tBWrLXxb90M=
success
rtmp://example.com:1935/vod/mp4:test.mp4?wowzatokenendtime=1461729868&wowzatokenstarttime=1461719068&wowzatokenhash=KpioKfCCQQoeVT4lwLUnwC2xhDG-HOS2kRtAx5PEHhY=
How do I access a file in a subdirectory in Wowza when SecureToken is enabled?
The problem with Wowza here seems to be with parsing the query string. If you can't change the content directory in your vod/Application.xml to the test subdirectory (or any other mount), or you don't want to do so, you may try moving the query string to right after the instance specification, i.e. app/(definst?qs=/file), or use a plugin to honor the directory structure completely.
An address to try for your file could be:
rtmp://jungslab.com:1935/vod/_definst_?wowzatokenendtime=1461729940&wowzatokenstarttime=1461719140&wowzatokenhash=nB0hdUG-U60WAQ-wV5lIRD0e3tbCCXk3tBWrLXxb90M=/mp4:test/test.mp4
(adjust your token parameters so they are valid)
Depending on the version/build that you have, there was a previous bug found (on 4.3.0.01 and earlier) where subdirectories were not parsed correctly with Secure Token enabled. You should try:
rtmp://jungslab.com:1935/vod/mp4:_definst_/test/test.mp4?wowzatokenendtime=1461729940&wowzatokenstarttime=1461719140&wowzatokenhash=nB0hdUG-U60WAQ-wV5lIRD0e3tbCCXk3tBWrLXxb90M=
You will need to re-generate your hash since your stream path has changed.
Alternatively, you can install the latest build from Wowza, since the fix should be in the latest available patch.
As a troubleshooting tool, you can add the Boolean properties securityDebugLogRejections and securityDebugLogDetails to your conf/appName/Application.xml file to output additional debug information to your logs/wowzastreamingengine_access.log file. In particular, you can see what string the server is using to generate the hash, and why the received hash was rejected.
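A sketch of how those debug properties might look inside conf/appName/Application.xml, assuming Wowza's standard Name/Value/Type property syntax (placement is under the application's Properties container):

```xml
<!-- Illustrative only: enable SecureToken rejection logging -->
<Properties>
    <Property>
        <Name>securityDebugLogRejections</Name>
        <Value>true</Value>
        <Type>Boolean</Type>
    </Property>
    <Property>
        <Name>securityDebugLogDetails</Name>
        <Value>true</Value>
        <Type>Boolean</Type>
    </Property>
</Properties>
```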
Related
I'm using node:crypto API, namely createCipheriv() and createDecipheriv() with aes-256-gcm cipher to encode/decode a stream of data. However, it looks like I need to call decipher.setAuthTag() in order to decode the stream correctly, otherwise it throws an authentication error in the end (however the data is decoded correctly).
Is there a way to avoid using the authentication checks with this cipher? I'm using streams of data and it's very inconvenient to store auth tag with the data (I'm using multiple storage options, one of which is a plain filesystem). The data consistency can be checked by other means.
Or maybe you could recommend a universal auth tag storage option that I could use with streams (which doesn't require random access and rewind)?
You need to use the auth tag if you are using GCM; GCM is CTR with authentication built in. If you don't want authentication, you could look into using a plain CTR cipher instead; Node Forge has this option.
Alternatively, you could append your tag to your ciphertext and store everything together. I'm not sure what the details of your storage system are that make that inconvenient.
I'm building a custom command-line tool using Node. The user will need to be able to sign in and persist their session. I have done this before using Node and Passport for a web app with localStorage, but how should I go about storing the user's JWT with a CLI tool?
If it's an OAuth2 or OIDC access_token then even though it's a JWT you should treat it as an opaque blob because OAuth2 and OIDC clients are not the intended audience for access_token (they're meant to pass it as-is verbatim to the remote Protected Resource).
I note that OAuth2 and OIDC allow access_token to be anything, including non-JWT tokens such as a short opaque "reference token" value.
This means that you can write the JWT (in its Base64-encoded format) directly to a file on-disk and read it back as you like. Because it's Base64 you don't need to worry about file encoding too much (e.g. both 7-bit ASCII vs UTF-8 are fine).
If it's an OIDC id_token then you could Base64-decode it and store the decoded raw JSON in a file if you intend to use the individual Claims stored within it in your client. Note that if you do store the raw JSON to a file, you must use UTF-8 unless you want difficulties down the line.
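A sketch of that decoding step in Node: the payload is the middle Base64url segment of the token. Note this does no signature verification; verify the token before trusting its claims.

```javascript
// Illustrative helper: extract the claims JSON from a JWT's payload segment.
function decodeJwtPayload(jwt) {
  const payload = jwt.split('.')[1];            // header.payload.signature
  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'));
}

// Demo with a hand-built unsigned token (header is {"alg":"none"}):
const demo = Buffer.from(JSON.stringify({ sub: 'alice' })).toString('base64url');
console.log(decodeJwtPayload(`eyJhbGciOiJub25lIn0.${demo}.`)); // { sub: 'alice' }
```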
Each platform has a preferred location for per-user temporary data:
On Windows, you should store this in a subdirectory of %LOCALAPPDATA% (C:\Users\me\AppData\Local), e.g. %LOCALAPPDATA%\YourCompany\YourProduct\Jwt.json.
If security is important you should encrypt this file at-rest using DPAPI: DPAPI encrypts files using a secret key that's part of the user's profile - you just pass the cleartext into the Win32 function and it returns the encrypted ciphertext which you then write to disk. Make sure you're careful about the binary encoding of any text you read and write, of course. DPAPI can be used on a per-user (roaming between machines) or per-machine (multiple users, but only on the same machine) basis.
Windows also has the Credential Manager API, but it's not well-suited to storing large blobs: https://learn.microsoft.com/en-us/windows/desktop/secauthn/kinds-of-credentials
On macOS you'll want to use the Keychain API: https://developer.apple.com/documentation/security/keychain_services
On Linux, there is no system-provided secret-storage mechanism ( https://dzone.com/articles/storing-secrets-in-linux ), but most approaches seem to write secrets to disk and then set chmod on the file to prevent access by other users. You could also encrypt the file with a custom password that the user must enter whenever your program runs.
As with Windows, you should still save this data under the user's home directory (~/) and not the shared /tmp directory. The convention on Linux for application-specific data is to use a hidden (dot-prefixed) home subdirectory, e.g. ~/.yourCompany/yourProduct or just ~/.yourProduct.
I'm developing an application using Chrome Native Messaging that starts through a Chrome Extension.
My question is: how can I ensure that the host application is really the one supplied by me?
I need to ensure the authenticity of the application called by the extension. How do I achieve that if I don't have permission to read the registry or to check whether something was changed?
That is an excellent question, and my guess is the answer is "unfortunately, you can't".
It would be interesting to implement some sort of cryptographic hash like the ones Chrome uses to verify extension files, but that's not a very strong guarantee.
Consider (all of this hypothetical):
You can secure the registry entry / manifest pretty easily this way, but what about the file itself?
Suppose you pin a hash of the executable; then it becomes painful to update it (you'll have to update the extension too, in sync). This can be resolved with some kind of public-key signature instead of a hash, though.
Suppose you pin the executable in the manifest. What about its data files? More importantly, what about the libraries a native app uses?
Securing a Chrome extension/app is easy, since the only "library"/runtime you rely on is Chrome itself (and you put trust into that). A native app can depend on many, many things on the system (like the already mentioned libraries), how do you keep track?
Anyway, this seems like an interesting thing to brainstorm. Take a look at the Chrome bug tracker to see if there is already anything similar; if not, try to raise a feature request. Maybe also ask the devs on a Chromium-related mailing list.
I realize this is an older post, but I thought it would be worth sharing the Chromium team's official response from the bug I filed: https://bugs.chromium.org/p/chromium/issues/detail?id=514936
An attacker who can modify registry or the FS on the user's machine can also modify the chrome binary, and so any type of validation implemented in chrome can be disabled by such attacker by mangling with the chrome's code. For that reason chrome has to trust FS (and anything that comes from local machine).
If I understood the question correctly, a solution could be:
1) During installation, register your (signed) executable with your server and store the registration number both inside the executable and on the server.
2) With each request (postMessage) from the extension, additionally send a token issued by your server.
3) The executable asks the server for the next token to use in its response, passing along the extension's token and its registration number.
4) The server responds with a token if the executable is registered.
5) The executable encrypts that token with its registration number and sends it to the extension along with the extension's token.
6) The extension's browser asks the server whether the response is genuine.
7) Using the extension's token, the server identifies the executable's registration number, decrypts the executable's token, and verifies that it was generated by the server for that extension token.
8) Once the server confirms, the browser accepts the response.
Importantly, your registration number must be kept secure so that the client machine cannot extract it from the executable (with proper signing this is achievable).
Since Chrome dropped Applet support, I implemented the same scheme for a smart-card reader in Chrome.
The only loophole is that the client machine can trace every request it sends with the help of some tools.
If you can make the communication between your executable and your server secure, e.g. with an httpOnly cookie (which the client machine cannot read) or a password mechanism, you can most likely achieve a secure solution.
I am writing an auto update client. It's a very simple app that:
1) Checks a central server to see if an update exists for some application
2) Downloads the install program from the server if a newer version exists
3) Runs the setup program
Other than server-side concerns (like someone hacking our site and placing a 'newer' malicious application there), what client-side security concerns must I take into account when implementing this?
My current ideas are:
1) Checksum. Include the checksum in the .xml file and check that against the downloaded file. (Pre or post encryption?)
2) Encrypt the file. Encrypt the file with some private key, and let this program decrypt it using the public key.
Are both or either of these necessary and sufficient? Is there anything else I need to consider?
Please remember this is only for concerns on the CLIENT-SIDE. I have almost no control over the server itself.
If you retrieve all of the information over HTTPS and check for a valid certificate, then you can be sure that the data is coming from your server.
The checksums are only as strong as the site from which they're downloaded.
If you use an asymmetric signature, so that the auto-update client has the public key, then you can sign your updates instead, and it won't matter if someone hacks your website, as long as they don't get the private key.
If I can compromise the server that delivers the patch, and the checksum is on the same server, then I can compromise the checksum.
Encrypting the patch is mainly useful if you do not use SSL to deliver the file.
The user who executes a program is usually not authorized to write to the installation directory (for security reasons; this applies to desktop applications as well as, e.g., PHP scripts on a web server). You will have to take that into account when figuring out how to install the patch.
Introduction
I want to create a Java web application for storing and backing up user files, similar to Dropbox. One interesting Dropbox feature is that it can detect whether a certain file already exists on the server. For example, if one user uploads a file to the server, another user who tries to upload the same file will not need to upload the file content again; the server only needs to mark that they have the same file. This saves bandwidth and space and increases speed in many ways.
The most basic solution to this problem is to use a file hash string, e.g. SHA-1, MD5, etc., to identify the file. The client software checks whether a certain hash exists on the server. If it does, it can skip the upload and mark that the user has the same file.
Problem
The web application is implemented with a REST architecture so that users can easily write their own client software to upload their files. For security reasons, SSL is enabled for all transactions. But my biggest security concern is users faking that they have a file without actually owning it, if I use SHA-1 or any other standard hash algorithm. This cannot be prevented by SSL or encryption. If a user manages to get the hash string (the MD5 and SHA-1 hashes of many files can be found by googling), he can mark that he has the file using the REST service on the web application.
So one possible solution is for the server to request a set of random bytes from the file as well as the hash of the whole file. Here are the example steps:
Client checks whether a certain hash exists on the server. If the file already exists, the server returns the positions of the required random bytes.
Client sends the random bytes as requested if the server has the file. Client software will not be able to respond to this without having the actual file.
In this way, it can save bandwidth while ensuring that users actually own the files they claim to upload.
Question
I am no expert in web security, so I have no idea whether this is a good idea. I have read articles saying that implementing your own fancy scheme might reduce the overall security, because the scheme hasn't been vetted and the extra information it exposes may provide an attack vector.
Does anyone have any comments on this process?
Will it reduce security?
Does anyone have an idea for solving this problem differently?
I understand that there might not be an exact answer to this question, but I would like to hear if anyone has encountered the same problem and has a good solution to it.
Rather than asking the client to upload some random bytes of the file's contents, it may be better to ask the client to upload the hash of a random region of the file. That way you can use a wider range of sizes for the region you ask the client to verify.
Better yet, though, may be to send the client a random number and require the client to compute an HMAC of the entire file's contents using that number as the key. This is more computationally-expensive since the server must compute the HMAC too, but it verifies that the client has the entire file, not just a small portion of it.
One unavoidable side effect of this hash feature, even with a verification scheme, is that it reveals that a copy of the file already exists somewhere on the server. That by itself may be sensitive information.
For the most stringent privacy protection, you should forego this feature and make each user upload their own copy of the file. You can use hash comparison on the server to avoid storing multiple copies of the file, transparently to the clients.