Is the concept of the Node.js event loop the same as CICS pseudo-conversational programming?

I am asking this question from an architectural point of view. I have been looking up tutorials and blog posts related to Node.js. Apart from being a server-side implementation of JavaScript, I don't see anything new compared to the basic concepts used in CICS since the 1970s.
I must admit that the implementation and other technical details are different (PC vs Mainframe, Scripting language vs COBOL, UNIX vs MVS). However, other than those, I don't see any difference.
Can someone offer some insights from the architectural view?

The purpose of CICS pseudo-conversational programming is to release common resources while the user is filling out the screen.
Node.js keeps a single thread for your code, while all input/output runs in parallel with your code.
With CICS, the developer has to code in a certain way (pseudo-conversational) so that the shared CICS system runs efficiently. With Node.js, the design lets you code without worrying about the underlying architecture.
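For illustration, here is a minimal sketch of that single-threaded, non-blocking model in TypeScript on Node (the file name is a made-up example):

```typescript
// Minimal sketch of Node's single-threaded, non-blocking model.
// "screen-data.txt" is a hypothetical file; the read is handed off to the
// runtime, and our code never blocks waiting for it.
import { readFile } from "fs";

console.log("1: request received"); // runs first

readFile("screen-data.txt", "utf8", (err, data) => {
  // Called later by the event loop, once the I/O has completed.
  if (err) throw err;
  console.log(`3: I/O done, got ${data.length} chars`);
});

console.log("2: still free to serve other work"); // runs before the callback
```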
I'd say that the concepts are different. The developer serves CICS, while Node.js serves the developer. It's like the difference between a dictatorship and a facilitator.

Actually, they are quite similar in many ways, though there are several important differences in their implementations. Similarities first: both are examples of a monitor style of programming, both react to events in a more or less message-passing style, and both are designed to keep from blocking on allocated resources. Both also work very well with message-passing middleware. CICS code can even be structurally similar (if you ignore the large and mostly mysterious number of constants and bizarre function names). There are also some profound differences, particularly with regard to transactionality, built-in security, and ease of management. While CICS has GUI management, it is a long way from the simplicity of Node. I believe Node is now available natively on mainframes as well.
I realize this is an old question, but I thought it deserved an update. The short answer is that they are not the same, but CICS can support a model very similar to Node's.
PS: I have written code for both. In some ways CICS seems more friendly in C and Java than in COBOL, which is what most people are familiar with. The respondent above is also right that they do not serve exactly the same purpose, although they can be used similarly. Node seems much easier to code for, but requires a lot of libraries and/or external components if you need some of the features that CICS provides out of the box.

Related

A real world project using microservices architecture

Does anyone know an open source project that uses a microservices architecture? I need a more realistic app that has addressed cross-cutting concerns, etc., not just an educational sample.
Please recommend one if you know any, especially if it's on the Node.js or C#/.NET stack.
Thanks
As far as I can tell, there are few, if any, open source projects out there using this pattern!
There are, however, many great frameworks/toolchains to help you implement it:
If you like Go, then you're going to like Go-Kit.
If you like C#, you might have a look at servicestack.net.
As for other languages/toolchains, I'm not very well informed, but there are many frameworks out there that can help you build microservices in almost every major language.
That said: the main reason for this lack of open source microservice systems is probably that most open source projects keep to a very narrow set of use cases in order to stay generic and reusable. This stands in contrast to what commercial backend systems have to provide, which is a myriad of usually very specific business services that will probably never be public, since they contain the company's competitive advantages and business-critical knowledge! Most large exponents of the microservice pattern (like Netflix, Spotify and SoundCloud) have, however, open sourced the tools and frameworks they built/used to get their services orchestrated, coordinated, synchronized, health-checked, load-balanced, scaled, etc.
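As one small, hedged illustration of what such tooling relies on, here is a sketch in TypeScript on Node of a conventional health-check endpoint; the route, port, and payload are illustrative assumptions, not taken from any of those companies' tools:

```typescript
// Minimal sketch (hypothetical service) of the health-check endpoint that
// orchestration and load-balancing tools typically poll.
import * as http from "http";

const server = http.createServer((req, res) => {
  if (req.url === "/health") {
    // Report liveness; a real service would also check its DB connection, etc.
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok", uptime: process.uptime() }));
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(8080); // each service exposes this on its own port
```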
To give you some general pointers towards microservices, Martin Fowler has some great resources. Also a good reference are the talks of Peter Bourgon on microservices.

What would be the disadvantages of building a website purely in Eiffel using EWF (Eiffel Web Framework)?

We are looking to build a website on top of an existing Eiffel business-tier core, which is sitting over a MS SQL Server database. I am presently considering the advantages and disadvantages of writing the web and mobile tiers either purely in Eiffel, purely in typical web-stacks, or some hybrid.
For us, there are clear advantages to pure Eiffel, not the least of which are:
Inheritance and other language notation mechanisms not found in other languages.
The compiler cannot see into code from other languages, so we are at the same disadvantage once we cross out of Eiffel into something else.
Auto-Test is something we heavily rely on in our Eiffel code, and it takes clear advantage of Design by Contract. In other languages, we lose this power and are left with TDD (i.e., their version of Eiffel's Auto-Test).
We have no more to learn than: Eiffel, HTML5, CSS3, JS, and whatever JS framework(s) we use.
Every new language and tool adds more complexity to the project.
Eiffel programs are compiled to C --> EXEs, which are far faster than their scripted and interpreted counterparts.
I think there are also some clear advantages to existing, non-Eiffel languages as well:
Existing frameworks and tools can deliver simple to moderately complex web sites and mobile applications rather quickly.
Existing "best-practices" are not terrible and producing reasonably reliable and maintainable code.
I am not sure what all of the advantages and disadvantages are, so I am asking. However, at the end of the day: Our core business suite is pure Eiffel. That will never change.
Thanks in advance for the feedback!
Here is what I can say from my own experience (I have created several web applications in different frameworks, including one in Eiffel). First, the Eiffel Web Framework is quite usable right now. The advantage of other frameworks is their features. Here is a list of the major problems I encountered when I created my web application with Eiffel:
I had to create the MVC design myself (other frameworks like Django, Rails or Laravel do that automatically).
Eiffel lacks a good templating system. The Smarty library is OK, but it really lacks some of the good template features that others have. Also, trying to work with UTF-8 files in Smarty can be quite difficult (this has been a pain for me).
I had to do some session management based on cookies because the one in the Eiffel Web Framework was quite primitive (a sketch of the general idea follows this list).
The release process (removing Nino) was not easy and lacked good documentation (I was using Apache; I don't know about IIS).
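For reference, here is the general idea behind that kind of cookie-based session handling, sketched in TypeScript; the function and cookie names are illustrative, not EWF's actual API:

```typescript
// Minimal sketch of cookie-based session tracking, framework-agnostic:
// session state lives server-side, keyed by a random id stored in a cookie.
import * as http from "http";
import { randomBytes } from "crypto";

type Session = { createdAt: number; data: Record<string, unknown> };
const sessions = new Map<string, Session>();

function getOrCreateSession(req: http.IncomingMessage, res: http.ServerResponse): Session {
  const match = (req.headers.cookie ?? "").match(/sid=([a-f0-9]+)/);
  let sid = match?.[1];
  if (!sid || !sessions.has(sid)) {
    sid = randomBytes(16).toString("hex"); // fresh, unguessable session id
    sessions.set(sid, { createdAt: Date.now(), data: {} });
    res.setHeader("Set-Cookie", `sid=${sid}; HttpOnly; Path=/`);
  }
  return sessions.get(sid)!;
}
```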
That's it; other than that, everything went quite smoothly.
The next list of disadvantages is from my naïve point of view:
The EWF package is not finished; it's going to gain more nice capabilities in the future, so you may need to follow its development to take advantage of new functionality.
The Eiffel compiler makes it impossible to update a web program on the fly; it needs to be recompiled and redeployed.
If the program is going to be multithreaded, you need to learn a structured way to deal with concurrency based on the SCOOP model.
Some tools (e.g., XSLT processors) are not readily integrated into EWF, you may need to do this yourself.
The current EWF API is rather low-level, so before higher-level frameworks built on top of EWF become widespread, you may need to do more low-level programming than expected (by low-level I mostly mean the way to generate HTML/XML/or some other format your web service is going to produce).
Using just one language for both application logic and HTML generation allows for easy debugging and may lower the skill requirements for developers, which may affect your business model.
There are several tools that address specific needs like wiki, simple web-page creation, authorization, etc., but you may need to enhance them to get richer functionality as well as to design the architecture of your software, because some idioms and usage patterns are not established yet.

WPP tracing for Linux

I'm looking for a way to output traces to a log file in my code, which runs on linux.
I don't want to include the printing information in the binary in every place I deploy it.
In Windows, I simply used WPP to trace without putting the actual trace strings in my binary.
How can this be achieved in Linux?
I'm not very familiar with Linux tools in this area, so maybe there is a better system. However, since nobody else has made any good suggestions, I'll make a suggestion. (Probably not a very good suggestion, but the best I can think of right now.)
In theory, you could continue to use WPP. WPP is simply a template system. It scans the configuration and input files to create data structures, then runs a template, filling in the data values it got from the scan and producing the .tmh files. You could create a new set of templates that would use Linux APIs instead of Windows APIs, and would record the message strings in a way that works with some other log-decoder system.
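To make the concept concrete, here is a toy sketch in TypeScript of the id-plus-catalog trick that keeps message strings out of the deployed binary. The record format and catalog here are entirely invented; real WPP uses generated .tmh metadata and ETW, not JSON:

```typescript
// Toy sketch of the core WPP idea: the deployed binary emits only a message
// id plus raw arguments; a catalog generated at build time (the analog of
// the .tmh data) maps ids back to format strings at decode time.
const catalog: Record<number, string> = {
  1: "opened file %s",
  2: "read %d bytes",
};

// What the deployed code writes: compact records, no format strings.
function trace(id: number, ...args: unknown[]): string {
  return JSON.stringify({ id, args, ts: Date.now() });
}

// What the offline decoder does, using the catalog the binary never ships.
function decode(record: string): string {
  const { id, args } = JSON.parse(record);
  let i = 0;
  return catalog[id].replace(/%[sd]/g, () => String(args[i++]));
}

console.log(decode(trace(2, 4096))); // -> "read 4096 bytes"
```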
I noticed this question only now and would like to add my two cents to the story, just in case. Personally, I truly appreciate Windows WPP tracing and consider it probably the best engineering solution for practical development troubleshooting among similar tools.
It happened that I extended WPP use to Unix-like platforms twice. We wanted to use the strong sides of the WPP concept in general and yet use it in multi-platform pieces of code. This was not a porting, but rather a wrapper around the specific WPP use we configured on Windows. One time we had a web service perform the actual WPP pre-processing on Windows; it may sound a bit insane, but it worked fine and effectively within the local network. A wrapper script that was executed before each compilation sent a web request, got a processed file, and post-processed the generated include file to make it suitable for Unix-like platforms. The second time, we implemented a simplified WPP pre-processor of our own (we found yet another use for it: we could generate the tracing statements differently for production and unit testing, for example). This was a harsh solution: you still need some physical tracing framework behind the wrapper on the non-Windows platform (well, the first time we apparently implemented our own lower level).
I do not think the Linux world has a framework comparable to WPP. Once I even thought it could be a great idea to start an open source porting project for WPP. I am not sure it would be much requested, though. I said it is a great engineering solution, but who wants to do dirty engineering work? The open source community prefers abstract, object-oriented and generic solutions, streaming, and less dependence on corresponding tools (WPP requires special management tools and OS support). Ease of code writing is today's choice.
The lack of WPP popularity could be Microsoft's fault (or unwillingness) too. They kept it as an internal framework that came out almost by accident with the Windows DDK, because they had to offer some logging/tracing solution for driver developers. Nobody even noticed much that WPP is well suited to user-space code too. And the WPP pre-processor for C#, for example, has never been exposed to the public at all.
Nevertheless, I still think that porting WPP to Unix/Linux could be a challenging, interesting and maybe even useful attempt. If someone decides to lead it. :)

Is it possible to make a Squeak VM embedded in C without any plugins?

I want to use Smalltalk as an embedded DSL engine in C. No plugins are required, and the whole custom environment will be made by me, so almost only the ObjectEngine will remain. Is this possible? I'm currently trying; however, any help will be appreciated.
-- edit --
Guidance on embedding any other Smalltalk implementation will also be appreciated (except GNU Smalltalk, because of its license...).
This is a difficult thing to do with Pharo/Squeak:
The object engine depends on many primitives, and thus many plugins need to be present. A while back most plugins couldn't be compiled statically; I don't know if this is still the case.
Building a whole custom environment is tricky, because it most likely means to strip down an existing image. There are various projects that try to build the infrastructure to bootstrap new images, but I haven't seen working solutions yet.
As Davorin writes, Dolphin Smalltalk can be deployed as a DLL. Similarly, this is possible with Cincom Smalltalk and Smalltalk/X. All these Smalltalks are commercial, though.
To summarize, you are probably better off looking at Lua or Python, which have been applied in your context many times already.
There was once a proposal for GSoC, but it was never done:
Packaging Squeak as a DLL
A conventional approach to making libraries written in a particular language available to other languages is to package a library as a dynamic load library or shared object (dll from here on in). Adapting that approach to Squeak would both allow use of Smalltalk code by a wider audience and enable alternative deployment approaches for Squeak applications, easing the creation of Squeak plug-ins for systems like Apache, web browsers and so on. There are broadly two different approaches one can take, which one could call passive or active. In the passive architecture, the Squeak dll is inactive until called from another language, and runs only until a result is answered to the caller. In the active architecture the act of loading the dll causes Squeak to start up on its own thread and accept incoming calls from other threads in a form of rendezvous. The passive approach is easier to build but less useful; one does not have the full range of Squeak facilities such as light-weight processes, delays etc.
The objective of the project would be to implement either the passive or the active approach, depending on the student's interest and ability. The goal is to make Squeak more broadly useful to users and application deployers alike. There are many technical challenges to be met that will involve both Smalltalk and C coding and the use of the Smalltalk-C hybrid language Slang in which the Squeak VM is written.
The benefits to the student include gaining an in-depth understanding of dlls, interfacing to dynamic languages, foreign function interfaces and of the Squeak VM. The student will also be gaining an understanding of architectural issues by considering the many trade-offs between the passive and active approaches.
The benefits to the Squeak community will be in being able to package and deploy Squeak applications much more broadly than before.
Dolphin Smalltalk from Object Arts can be deployed as a DLL. But you would need to check the license for your particular use case, and it is Windows-only.

Which programming language suits critical web application development?

According to this page, it seems that Perl, PHP, and Python are 50 times slower than C/C++/Java.
Thus, I think Perl, PHP, and Python could not handle critical applications (such as >100 million users, >xx million requests every second) well. But exceptions exist, e.g., Facebook (it is said Facebook is written entirely in PHP) and Wikipedia. Moreover, I heard Google uses Python extensively.
So why? Does faster hardware fill the big speed gap between C/C++/Java and Perl/PHP/Python?
Thanks.
Computational code is the least of my concerns in most heavy-usage web applications.
The bottlenecks in a typical high-availability web application are (not necessarily in this order, but most likely):
Database (IO and CPU)
File IO
Network Bandwidth
Memory on the Application Server
Your Java / C++ / PHP / Python code
Your main concerns to make your application scalable are:
Reduce access to the database (caching, with clustering in mind, and smart querying; see the sketch after this list)
Distribute your application (clustering)
Eliminate useless synchronization locks between threads (see commons-pool 1.3)
Create the correct DB indexes, data model, and replication to support many users
Reduce the size of your responses, using incremental updates (AJAX)
Only after all of the above are implemented, optimize your code
Please feel free to add more to the list if I missed something
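As a hedged illustration of the first point (reducing database access), here is a minimal read-through cache sketch in TypeScript; the class, keys, and the commented usage line are all made up for illustration:

```typescript
// Minimal sketch of a read-through query cache with a TTL, the simplest way
// to cut repeated database round-trips.
type CacheEntry<T> = { value: T; expiresAt: number };

class QueryCache<T> {
  private entries = new Map<string, CacheEntry<T>>();
  constructor(private ttlMs: number) {}

  async get(key: string, fetch: () => Promise<T>): Promise<T> {
    const hit = this.entries.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit
    const value = await fetch();                             // miss: hit the DB
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}

// Usage: repeated calls within 30 s skip the database entirely.
// const users = await cache.get("top-users", () => db.query("SELECT ..."));
const cache = new QueryCache<string[]>(30_000);
```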
The page you are linking to only tells half the truth. Of course native languages are faster than dynamic ones, but this is only critical for applications with high computing requirements. For most web applications it is not so important; a web request is usually served fast. It is more important to have an efficient framework that manages resources properly and starts new threads to serve requests quickly. Also, timing behaviour is not the only critical aspect: reliable and error-free applications are probably better achieved with dynamic languages.
And no, faster hardware isn't a solution. In fact Google is famous for using a cluster of inexpensive machines.
(such as >100 million user,>xx million request every second)
To achieve that sort of performance, you are going to HAVE to design and implement the web site / application as a scalable multi-tier system with replication across (probably) all tiers. At this point, the fact that one programming language is faster / slower than another probably only affects the number of machines you need in your processor farm. The design of the system architecture is far more significant.
There is no JIT compiler in PHP that compiles the code into machine code.
Another big reason is PHP's dynamic typing; a dynamically typed language is always going to be slower.
Click below and read more:
What makes PHP slower than Java or C#?
C is easily the fastest language out there. It's so fast we write other languages in it. Nobody seriously writes web sites in C. Why? It's very easy to screw up in C in ways that are very difficult to detect, and it does almost nothing to help you. In short, it eats programmers and generates bugs.
Building a robust, fast application is not about picking the fastest language, it's about A) maintainability and B) scalability.
Maintainability means it doesn't have a lot of bugs. It means you can quickly add new features and modify existing ones. You want a language that does as much of the work as possible for you and doesn't get in the way. This is why things like Perl, Python, PHP and Ruby are so popular. They were all written with the programmer's convenience in mind over raw performance or tidiness. C was written for raw performance. Java was written for conceptual tidiness.
Scalability means you can go from 10 users to 10,000 users without rewriting the whole thing. That used to mean writing the tightest code you could manage, but highly optimized code is usually hard-to-maintain code. It usually means doing things for the benefit of the computer, not the human and the business. That sacrifices maintainability, and you have to tell your boss it's going to take 3 months to add a new feature.
Scalability these days is mostly achieved by throwing hardware at it and parallelizing. How many processes and processors and machines can you farm your work out to? If you can achieve that, you can just fire up another cheap cloud computer as you need it. Of course you're going to want to optimize some, but at this scale you get so much more out of implementing a better algorithm than tightening up your code.
For example, I took a sluggish PHP app that was struggling to handle 50 users at a time, switched from Apache with mod_php to lighttpd with load balanced, remote FastCGI processes allowing parallelization with a minimum of code change. Some basic profiling revealed that the PHP framework they used to prototype was dog slow, so it was stripped out. Profiling also suggested a few indexes to make the database queries run faster. End result was a system that could handle thousands of users and more capacity could be added as needed while leaving most of the code implementing the business logic untouched. Took a few weeks, and I don't really know PHP well.
It may be beneficial to reimplement small, sharp pieces in a very fast language, but usually that's already been done for you in the form of an optimized library or tool. For example, your web server. For the complexity and ever-changing needs of business logic the important thing is ease of maintenance and how good your programmers are.
You will find that most of the web is written in PHP, Perl and Python because they are easy to write in, with small, sharp bits written in things like C, Java and exotics like Scala (for example, Twitter). Wikia, for example, is a modified MediaWiki, which is written in PHP, but it is performant (amongst other reasons) because it does a heroic amount of caching.
Google is using Python for GAE, and Windows Azure is providing PHP. The LAMP architecture is great for application scalability.
I also think that the programming language is not that important regarding performance. The most important thing is to look at the architecture of your app.
I hope it helps
To serve a web page, you need to:
Receive and parse the request.
Decide what you wish to do with the request.
Read/write persistent data (database, cache, file system)
Output HTML data.
The "speed" of the server side language only applies to steps two and four. Given that most scripts strive to keep step 2 as short as possible, and that most web languages (including PHP) optimize step 4 as much as they can, in any serious web site most of the request processing time will be spent in step 3.
And the time spent on step 3 is independent of the server-side language you use ... unless you implement your own database and distributed cache.
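To make those four steps concrete, here is a minimal sketch in TypeScript on Node; loadFromDatabase is a hypothetical stand-in for the real data tier:

```typescript
// Minimal sketch mapping the four steps onto an HTTP handler.
import * as http from "http";

http.createServer(async (req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost"); // 1. parse the request
  const page = url.pathname;                               // 2. decide what to do
  const rows = await loadFromDatabase(page);               // 3. persistent data:
                                                           //    dominates the time
  res.writeHead(200, { "Content-Type": "text/html" });     // 4. output HTML
  res.end(`<ul>${rows.map((r) => `<li>${r}</li>`).join("")}</ul>`);
}).listen(3000);

// Hypothetical stand-in; real code would query a DB or cache here.
async function loadFromDatabase(page: string): Promise<string[]> {
  return [`content for ${page}`];
}
```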
For PHP, there are a lot of things you can do to increase performance. For example:
Use a PHP accelerator
Cache queries
Optimize queries
Use a profiler to find the slower parts and optimize them
These things would certainly help reduce the gap to lower-level languages. So, to answer your question: there are other things you can do inside the code to optimize it and make it run faster.
I agree with luc. It's the architecture that really matters, not the programming language.
