Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 7 years ago.
4G LTE consists of a protocol stack with different layers. I understand the diagram of that stack, but one thing I didn't get: are they protocols, or just layers? If they are protocols, can I get open-source code in C for each protocol?
I guess by "layers" you mean the PDCP, RLC, MAC, NAS, and RRC that you see in the LTE user-plane/control-plane protocol stacks. Yes, they are protocols between a UE (User Equipment) and the LTE network (eNB, MME, etc.), and there is a protocol specification defined for each one in 3GPP. For example, RRC is defined in 36.331 and RLC in 36.322.
I think for some simpler protocols like PDCP or RLC you can find open-source implementations. However, for more complicated protocols like RRC, NAS or MAC, I haven't seen open source.
Actually, LTE is a whole system (or stack). It is divided into different functions, which we call layers. The way each layer is designed is specified by a protocol, so each protocol maps to a layer.
So NAS, RRC, PDCP, RLC, MAC and PHY are each both a layer and a protocol. As Alex Wang said, you can find the protocol specifications in 3GPP.
You can also find open-source code, but the quality is not that good.
As references:
http://www.openairinterface.org/
http://openlte.sourceforge.net/
The short answer is that you will not obtain open-source C code for the protocol stack. There are companies out there that will sell you C code (for amazing amounts of money), but it was derived from the SDL diagrams in the specs, run through a CASE tool to generate the C code.
There are ways around this, though: convert the SDL sequences given in the specs and implement them as a sequential design in a functional programming environment like Haskell or Erlang. This is actually how manufacturers of network equipment do it.
A protocol stack is a set of protocol layers. The design is such that they are layers, with protocols for inter-working between layers / network entities.
The challenge in finding such tools is that the LTE standards are evolving very fast, so it would take substantial effort to keep an implementation of the complicated layers in line with the changes.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I'm working on the design of a custom carrier board based on a Xilinx Ultrazed-EG SOM.
Specifically, the Carrier (embedding the SoM) should realize the PROFIBUS DP master node in the specific industrial network.
I'm quite new to this field; nevertheless, my idea is to create the Profibus software stack on the Xilinx Ultrascale+ SoM, then to use a schematic similar to the one on page 90 of this document to connect the SoM to the DB9 connector.
For the sake of clarity, I attach the schematic below.
Specifically, my idea is to use a UART port to drive the TXD and RXD pins, and GPIOs for the RTS and CTS pins.
What's your opinion of the architecture described above? Is it a practicable way to do this? What are the pros and cons?
Thank you so much for your kind answers. Sincerely.
I won't say that what you intend to do is impossible, but I will say it would be a huge effort.
I'm not sure how familiar you are with Profibus. Unlike protocols such as Modbus, for which you would find plenty of documentation and code to work with and could have a working solution within a couple of afternoons, building your own Profibus stack from scratch would take quite a long time even for a team of experienced developers.
I have been looking at Profibus for a while and the only short way to have a working network quickly is to use Texas Instruments processors. You can take a look at the answer I wrote here. At the moment there is no free implementation of the stack for Linux, so you need to use TI RTOS. In their support forum, they have mentioned a couple of times they are working on a Linux port but at the moment you would have to pay for it (that should not be a problem if you are working on a commercial product, of course).
The hardware front would be the easy part. You should be able to replicate the circuit you posted from Siemens as long as your board supports 5V logic (I did not check). If, on the contrary, it works on 3.3V you only need to change the optocouplers. For a test or at-home environment, you can even drop the optocouplers altogether or just use a MAX485, which you can find ready to use on a PCB for less than a dollar.
Another quick-and-dirty way to interface with a network of Profibus slaves would be the obvious one: buy a commercial off-the-shelf PLC to act as the master and make your board talk to it. If you use the PLC as a Profibus-to-Modbus gateway, for instance, you could have a working solution in no time. You can even use something like this.
I hope my answer gives you some ideas. I'll be looking forward to your comments.
It is a clever choice to implement this using an FPGA.
However, you should also consider your requirements for time-to-market.
With an FPGA approach to a Profibus DP implementation, you must develop the whole Profibus DP stack yourself or buy one from a third-party company (such as Softing). This takes time, and for a serious solution you will later need PI certification (also costly). You should also consider compatibility with a network configurator (software) already on the market, or develop your own configurator.
Regarding your hardware, I have some considerations:
I suggest you use the ISO1176 (ti.com/product/ISO1176) instead of the 75ALS176. It is a more modern approach, and the ISO1176 has very good electrical characteristics.
Remember, regarding the physical layer: PROFIBUS DP is a type of RS-485, but RS-485 is not PROFIBUS DP. So not all RS-485 transceivers are suitable for a Profibus DP implementation. (https://www.youtube.com/watch?v=lxFeFx2A6dM)
Another approach is to use an embedded module from a company like Hilscher (https://www.hilscher.com/products/product-groups/embedded-modules/) or Anybus (https://www.anybus.com/products/embedded-index). There are other companies as well, but these also provide a configurator compatible with the embedded module (you will need to configure your network).
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
I'd like to model software components and the interactions between them: what information is passed, what processes take place in each component (not too detailed), and a clear specification of the input/output of each component.
What I've seen so far in UML is far too abstract and doesn't go into enough detail.
Any suggestions?
Some people design programs on paper as diagrams, then pass them to software developers to construct.
This approach has been tried: "clever guys" do the modelling and pass the models to "ordinary" developers to do the laborious work. And it did not work.
We like analogies, so we often compare ourselves to the construction industry, where some people make the models (blueprints) and others do the building (construction). At first we think that UML and other model diagrams are the equivalent of the construction industry's blueprints. But it seems we are wrong.
To make the analogy with the construction industry properly, our blueprints are not the models and diagrams; our blueprints are actually the code we write.
Detailed Paper Models Are Like Cooking Recipes
It is not realistic to design a software system entirely on paper with detailed upfront models. Software development is an iterative and incremental process.
Think of a map maker who makes a paper map of a city as big as the city itself, because he includes every detail without any level of abstraction. Would it be useful?
Is Modelling Useless?
Definitely not. But you should apply it to the difficult parts of your problem/solution space, not to every trivial part of them.
So instead of giving developers every detail of the system on paper, explore the difficult parts of the problem/solution space with them face to face, using visual diagrams.
In the software industry, like it or hate it, source code is still the king, and all models are liars until they are implemented and tested.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
Could anyone explain the difference between FIX and FAST? When should one use FIX, and when should one use FAST?
From an equities trading perspective, FAST is more widely used for market data dissemination, where message rates are much higher. FIX is the protocol of choice for interoperability between firms, and often internal systems as well, although different implementations can vary widely in the specific messages & attributes used.
Brokers and trading venues will generally offer order entry via some flavour of FIX, and offer a complementary native binary protocol for the most performance-sensitive clients or specialised features. The FIX interface is often just a wrapper around the native one, with a more limited set of message types and parameters.
A good example of this is the London Stock Exchange, which offers FIX 5.0 for order entry along with its own low-latency native protocol. For market data it offers a combination of FAST and ITCH, although even using FAST, the full-depth market data feed isn't available to subscribers and requires ITCH, as described here.
FAST (FIX Adapted for STreaming) is still FIX, but customised to send data across more quickly, because of the huge increase in the volume of data transferred in today's markets compared to a normal FIX implementation. This should clarify a bit more.
FIX is a text-based protocol in which all information is encoded in tag=value format and delimited with a special character:
'....35=X|55=EUR/USD...'
This means that even numeric data is sent as text, e.g. 1000000, which takes 7 bytes instead of the 4 it would take encoded as binary.
FAST is a solution to remove this overhead. It is based on the concept of templates, which describe the byte order, sizes and meaning of the fields.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
Does it make sense to focus on Domain-Specific Language (DSL) development following a Software Product Line approach?
Does anyone know any other approach to creating and maintaining several related Domain-Specific Languages at the same time? Note that supporting a custom language requires supporting multiple tools, from parsers, compilers and interpreters to a state-of-the-art IDE, etc.
Our DMS Software Reengineering Toolkit is exactly this idea. DMS provides generic parsing, tree building, and analyses (name resolution, control-flow analysis, data-flow analysis, call graphs and points-to analysis, custom analyzers, arbitrary transformations). It has a variety of legacy language front ends, as well as some DSLs (e.g., HTML, XML, temporal logic equations, industrial controller languages, ...) off the shelf, but has very good support for defining other DSLs.
We use DMS both to build custom analysis and transformation tools and as a product-line generator. For example, we provide test coverage, profilers, smart differencers, and clone detection for a wide variety of languages... because DMS makes this possible. Yes, this lowers our development and maintenance costs, because each of these tool types uses DMS directly as a foundation. Fundamentally, it allows the amortization of language parsers, analyzers and transformers not only across specific languages, but across dialects of those languages and even different languages.
I think it makes sense to focus on DSLs following a Software Product Line approach. If you define the DSL correctly, it will essentially define a framework for creating applications within a domain, together with an operating environment in which they execute. By operating environment, I mean the OS, hardware and database, as well as the code that implements the semantics, or run-time environment, of the DSL. The framework and operating environment will be the artifacts that get reused across product lines. You may have to create an operating environment that consists of run-time environments for multiple DSLs to support multiple product lines.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 4 years ago.
I'm looking for a diagramming program that can help in designing software, right from the high-level requirements down to the low-level classes and functions.
I've seen a lot of UML programs, but they don't let you design at multiple levels of detail in the same map, as if you could "zoom in" and design the details of a part.
Do programs exist that help in such designing? Programs that let you design at the high-level and low-level on the same map?
Most of the UML products from large vendors will let you do what you want. Rational Rose and Enterprise Architect are just two examples that I have used. They both let you mix component-, package- and class-level information in the same view. Both of them provide a way to specify requirements as part of the metadata of a class and, I believe, of functions too.
Edit 8/23/09
I just found Topcased. It's free and does many of the same things as Rose & EA. I'm not sure about mixing different diagram types in one view, but you might want to give it a shot. I'm definitely going to investigate using it for my personal projects.
I recommend BOUML. It's a free UML modelling application, which:
has great SVG export support, which is important because viewing large graphs in a vector format that scales quickly (e.g. in Firefox) is very convenient (you can quickly switch between a bird's-eye view and a class-detail view),
this can work as the "zoom" feature you're asking about (I use such SVG exports myself to get a quick overview of a group of classes and then zoom in to the details of a selected one),
is extremely fast (fastest UML tool ever created, check out benchmarks),
has rock solid C++, Java, PHP and others import support,
is multiplatform (Linux, Windows, other OSes),
is full featured and impressively intensively developed (look at the development history; it's hard to believe that such fast progress is possible),
supports plugins and has a modular architecture (this allows user contributions; it looks like a BOUML community is forming).
The "zoom" feature you're asking about can be obtained through SVG export. I use such exports myself in the way you describe.
I've used Rational Rose, and it looks like it fits your needs.
You could try BOUML which, although it doesn't allow you to "zoom in", does cover all the aspects of UML, and allows you to view different parts of the design at once (in multiple windows). It is also free, which may or may not make it more desirable for you, and is quite cross platform.
First of all, there are different diagrams for the different things you want to express. During software design you don't only use UML, but also HTML sketches and the like. So my advice is to choose the right tool for the right task. Create a folder structure matching your granularity: one for frontend sketches (which can be organised hierarchically), one for class diagrams, and so on. Try to establish a process that fulfils all your needs. Often the holy-grail program doesn't exist, or is not good, precisely because it tries to satisfy too many customers.