What are the different security modes available in Laravel 4 when using the Crypt::encrypt method? The security documentation shows how to use Crypt::setMode with ctr as the mode. What other modes can be used?
I went through the source of the Encryption class and noticed that cbc is set by default. Are there other modes that can be used?
According to the source code, it seems that you can use anything that PHP's mcrypt supports. There are two pretty extensive lists of Available Ciphers and Available Modes.
Laravel 4.0.0 source code reference:
https://github.com/laravel/framework/blob/v4.0.0/src/Illuminate/Encryption/Encrypter.php#L79
https://github.com/laravel/framework/blob/v4.0.0/src/Illuminate/Encryption/Encrypter.php#L245-L259
I saw some docs on the Intel OpenVINO website. There are docs about how to use a single NCS2, and the performance is great. Now I have two NCS2 sticks and want to test both of them on one platform, but there is no reference explaining how to use multiple NCS2 devices to work on one task.
The OpenVINO™ toolkit (>=2019 R2) introduced a Multi-Device Plugin that automatically assigns inference requests to the available devices in order to execute the requests in parallel. This allows you to use the Multi-Device Plugin with multiple Intel® Neural Compute Stick 2 devices.
The benchmark_app in C++/Python is a good starting point to check how this plugin works. If you'd like to test-drive the Multi-Device plugin, my recommendation would be to follow the article here, as it contains a comprehensive and detailed walk-through of testing this feature on both Windows and Linux environments.
The typical "setup" of multi-device can be described in three major steps (a rough sketch follows the list):
1. Configuration of each device as usual (e.g. via the conventional SetConfig method).
2. Loading of a network to the Multi-Device plugin created on top of a (prioritized) list of the configured devices.
3. Just like with any other ExecutableNetwork (resulting from LoadNetwork), you just create as many requests as needed to saturate the devices.
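A rough C++ sketch of those three steps, assuming the Inference Engine API from a recent OpenVINO release (the Core::ReadNetwork call, the model path, the MYRIAD device names, and the request count are assumptions/placeholders; the real device names can be listed with Core::GetAvailableDevices()):

// Hypothetical sketch only; model path and MYRIAD device names are placeholders.
#include <inference_engine.hpp>
#include <vector>

int main() {
    InferenceEngine::Core ie;

    // Step 1: configure each device as usual (per-device settings go here).
    // ie.SetConfig({{...}}, "MYRIAD");

    // Step 2: load the network on the MULTI plugin, created on top of a
    // prioritized list of the configured devices.
    InferenceEngine::CNNNetwork network = ie.ReadNetwork("model.xml");
    InferenceEngine::ExecutableNetwork exec_net =
        ie.LoadNetwork(network, "MULTI:MYRIAD.1.2-ma2480,MYRIAD.1.4-ma2480");

    // Step 3: create as many infer requests as needed to saturate both sticks.
    std::vector<InferenceEngine::InferRequest> requests;
    for (int i = 0; i < 8; ++i)
        requests.push_back(exec_net.CreateInferRequest());

    // Fill the input blobs, then drive each request with StartAsync()/Wait().
    return 0;
}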
For this and more detailed information, check the Multi-Device Plugin documentation.
I would like to use OpenSSL for handling all our SSL communication (both client and server sides). We would like to use a HW acceleration card for offloading the heavy cryptographic calculations.
We noticed that in the OpenSSL 'speed' test there are direct calls to the cryptographic functions (e.g. RSA_sign/decrypt, etc.). In order to fully utilize the HW capacity, multiple threads were needed (up to 128 threads) to keep loading the card with requests and ensure the HW card is never idle.
We would like to use the high-level OpenSSL API for handling SSL connections (e.g. SSL_connect/read/write/accept), but this API doesn't expose the point where the actual cryptographic operations are done. For example, when calling SSL_connect, we are not aware of the point where the RSA operations are performed, and we don't know in advance which calls will lead to heavy cryptographic calculations so that we could offload only those to the accelerator.
Questions:
How can I use the high level API while still fully utilizing the HW accelerator? Should I use multiple threads?
Is there a 'standard' way of doing this? (implementation example)
(Answered in UPDATE) Are you familiar with Intel's asynchronous OpenSSL? It seems that they were trying to solve this exact issue, but we cannot find the actual code or usage examples.
UPDATE
From Accelerating OpenSSL* Using Intel® QuickAssist Technology you can see that Intel also mentions the use of multiple threads/processes:
The standard release of OpenSSL is serial in nature, meaning it handles one connection within one context. From the point of view of cryptographic operations, the release is based on a synchronous/blocking programming model. A major limitation is throughput can be scaled higher only by adding more threads (i.e., processes) to take advantage of core parallelization, but this will also increase context management overhead.
Intel's OpenSSL branch can finally be found here.
More info can be found in the PDF contained here.
It looks like Intel changed the way the OpenSSL ENGINE works: it posts work to the driver and returns immediately, while the corresponding result has to be polled for.
If you use a different SSL accelerator, then the corresponding OpenSSL ENGINE should be modified accordingly.
According to Interpreting openssl speed output for rsa with multi option, -multi doesn't "parallelize" the work; it just runs multiple benchmarks in parallel.
So your HW card's load will essentially be limited by how much work is available at any given moment (note that in industry in general, 80% of planned capacity is traditionally considered the optimal load, to leave headroom for spikes). Of course, running multiple server threads/processes will give you the same effect as multiple benchmarks.
OpenSSL supports multiple threads provided that you give it callbacks to lock shared data, as sketched below. For multiple processes, it warns about reusing data state inherited from the parent.
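Here is a minimal sketch of those locking callbacks, assuming OpenSSL 1.0.x (1.1.0 and later handle locking internally, so this setup is no longer needed there):

// Hypothetical sketch for OpenSSL 1.0.x: register locking callbacks so the
// library can safely be used from multiple threads.
#include <openssl/crypto.h>
#include <mutex>

static std::mutex *ssl_locks = nullptr;

// Called by OpenSSL whenever it needs to take or release lock number n.
static void locking_callback(int mode, int n, const char * /*file*/, int /*line*/) {
    if (mode & CRYPTO_LOCK)
        ssl_locks[n].lock();
    else
        ssl_locks[n].unlock();
}

void init_openssl_thread_locks() {
    ssl_locks = new std::mutex[CRYPTO_num_locks()];
    CRYPTO_set_locking_callback(locking_callback);
    // Setting a thread-id callback (CRYPTO_THREADID_set_callback) is also
    // recommended on platforms where the default is not sufficient.
}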
That's it for scaling vertically. For scaling horizontally:
- OpenSSL supports asynchronous I/O through asynchronous BIOs,
- but its elemental crypto operations and internal ENGINE calls are synchronous, and changing this would require a logic overhaul,
- private efforts to make them provide asynchronous operation have met severe criticism due to major design flaws,
- Intel announced an "Asynchronous OpenSSL" project (08.2014) to use with its hardware, but the linked white paper gives few details about its implementation and development state. One developer published some related code (10.2015), noting that it's "stable enough to get an overview".
As jww has mentioned in the comments, you should use the ENGINE API to accomplish the task. There is an example in the link above on how to use that API. Usually, the hardware accelerator provider implements a library called an "ENGINE"; this engine provides cryptographic acceleration and can be used by OpenSSL internally. Assuming that the accelerator you want to use has an ENGINE implemented (for example "cswift"), you get the engine by calling ENGINE *e = ENGINE_by_id("cswift");, then initialize it with ENGINE_init(e);, and set it as the default for the operations you want to use, for example ENGINE_set_default_RSA(e);.
After calling these functions, you can use the high-level API of OpenSSL (e.g. SSL_connect/read/write/accept).
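A minimal sketch of that sequence, assuming the accelerator's driver ships an ENGINE with the id "cswift" (that id is just the example used above; substitute the one provided by your vendor):

// Hypothetical sketch: substitute your accelerator's engine id for "cswift".
#include <openssl/engine.h>

bool use_accelerator_engine() {
    ENGINE_load_builtin_engines();

    ENGINE *e = ENGINE_by_id("cswift");   // structural reference
    if (!e)
        return false;

    if (!ENGINE_init(e)) {                // functional reference
        ENGINE_free(e);
        return false;
    }

    // Route RSA operations through the card; other algorithms can be added via
    // ENGINE_set_default_DSA/DH or ENGINE_set_default(e, ENGINE_METHOD_ALL).
    ENGINE_set_default_RSA(e);

    ENGINE_free(e);  // drop the structural reference; the functional reference
                     // taken by ENGINE_init keeps the engine usable
    return true;
}

Once this has run, high-level calls such as SSL_connect/SSL_accept will dispatch their RSA operations to the engine automatically.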
We're running several apps against the same Memcached, so I'd like to configure different prefixes for the apps using Rack::Attack. By default, the apps would overwrite each other's cache.
I've seen the prefix accessor in Rack::Attack::Cache and there's even a low-level spec for it but there are no examples on how to use it.
According to the README and the introductory blogpost, I never have to deal with Rack::Attack::Cache but always with the higher-level Rack::Attack.
So, how can two or more apps use the same memcached for Rack::Attack without overwriting each other's cache keys?
Rack::Attack.cache.prefix = "custom-prefix"
Rack::Attack.cache is an instance of the Rack::Attack::Cache class.
I want to set up MongoDb on a single server, and I've been searching around to make sure I do it right. I have gleaned a few basics on security so far:
Enable authentication (http://www.mongodb.org/display/DOCS/Security+and+Authentication - not enabled by default?)
Only allow localhost connections
In PHP, be sure to cast GET and POST parameters to strings to avoid injection attacks (http://www.php.net/manual/en/mongo.security.php)
I've also picked up one thing about reliability.
You used to have to use sharding on multiple boxes, but now you can just enable journaling? (http://stackoverflow.com/questions/3487456/mongodb-are-reliability-issues-significant-still)
Is that the end of the story? Enable authentication and journaling and you are good to go on a single server?
Thanks!
If you are running on a single server, then you should definitely have journaling enabled. On 2.0, this is the default for 64-bit builds; on 32-bit builds or older releases (the 1.8.x series) you can enable it with the --journal command-line flag or config file option. Be aware that using journaling will cause MongoDB to use double the memory it normally would, which is mostly an issue on 32-bit machines (memory there is ordinarily constrained to around 2GB; with journaling it is effectively halved).
Authentication can help, but the best security measures are to ensure that only machines you control can talk to MongoDB. You can do this with the --bind_ip command-line flag or config file option. You should also set up a firewall (iptables or similar) as an extra measure.
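For reference, here is a hypothetical 2.0-era config file (e.g. /etc/mongodb.conf) combining these options; the dbpath is a placeholder:

# Hypothetical example config; adjust paths to your environment.
dbpath = /var/lib/mongodb
port = 27017
bind_ip = 127.0.0.1   # only accept connections from localhost
journal = true        # durability for a single-server deployment
auth = true           # require authentication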
As for injection attacks, you should mostly be safe, so long as you don't blindly convert JSON (or similar structures) into PHP associative arrays and pass them directly to the MongoDB methods. If you construct the array yourself, by processing the $_POST or $_GET values, you should be safe.
As stated in the title, I would like to know whether it's safe to develop a website using one of the current "omg" platforms, Node.js or Ringo.js, at their current versions.
Also, I would like to know if they support cookies/sessions and how they deal with multi-field POSTs (fieldname[] in PHP).
Thank you
--Edit--
Thanks for all the links guys.
What can you tell me about Ringojs?
I haven't yet figured out which platform to start playing with. I must admit that the fact that it can use Java seamlessly really impresses me. The only available XSLT 2.0 library is in Java, and I could use it as a templating system.
Has anyone had the chance to play with Ringojs?
From my experience using both, Ringo is more stable and "safer" for production use but you can comfortably deploy both. In addition to the ability to wrap existing Java libraries that you mention, you also get the benefit of being able to run it in an existing webapp container which manages the lifecycle of the application for you and ensures its availability.
That being said, it doesn't have to be an either or decision. By using my common-node package and assuming you don't use any Java libraries, it's perfectly feasible to maintain a project that runs on both without any changes to the code.
I've also included benchmarks that test the performance of Node.js vs. RingoJS the results of which you can find in the common-node/README.md. To summarize: RingoJS has slightly lower throughput than Node.js, but much lower variance in response times while using six times the RAM with default Java settings. The latter can be tweaked and brought down to as little as twice the memory usage of Node with e.g. my ringo-sunserver but at the expense of decreased performance.
Node.js is stable, so yes, it's safe to use. Node.js is capable of handling cookies, sessions, and multiple fields, but they are not as easy to manage on their own. Web frameworks solve this problem.
I recommend Express.js, it's an open-source web framework for Node.js which handles all of this and more.
You can download it here:
https://github.com/visionmedia/express
I hope this helped!
Examples of some of the bigger sites running Node.js
https://www.learnboost.com/
http://ge.tt/
https://gomockingbird.com/
https://secured.milewise.com/
http://voxer.com/
https://www.yammer.com/
http://cloud9ide.com/
http://beta.etherpad.org/
http://loggly.com/
http://wordsquared.com/
Yes. It is. https://github.com/joyent/node/wiki/Projects,-Applications,-and-Companies-Using-Node and https://github.com/joyent/node/wiki/modules
For cookies/sessions/forms etc., http://expressjs.com/ makes it easier.
Ringojs is a framework developed by Hannes Wallnöver and uses Rhino as its scripting engine. There are web frameworks, templating engines, ORM packages and many more things already available. Have a look at the tutorial featuring a good subset of packages you may use for a simple web application. It's not too long and is straightforward.
Even though some of the packages used within the tutorial (e.g. ringo-sqlstore) are marked as 0.8 and come with the hint "consider this beta", they are already very stable, and bugs, if you find one, get fixed or commented on very quickly.
And the power of countless Java libraries out there is at your fingertips, so if you already have Java knowledge, that knowledge isn't wasted. Rhino, the scripting engine, even enables you to implement interfaces and extend classes. It is admittedly a little more advanced, but I've done it, and I know of packages taking advantage of such features (like ringo-ftpserver, which is a wrapper around the Apache FtpServer written in Java).
Another pro for me is that, because Ringojs is based on Java, it works fairly well with multithreading, with ringo/worker for example.