PROFIBUS Architecture for UltraScale+: experts' opinion request [closed] - protocols

I'm working on the design of a custom carrier board based on a Xilinx UltraZed-EG SoM.
Specifically, the carrier board (which hosts the SoM) should act as the PROFIBUS DP master node in the target industrial network.
I'm quite new to this field; nevertheless, my idea is to implement the PROFIBUS software stack on the Xilinx UltraScale+ SoM and then use a schematic similar to the one on page 90 of this document to connect the SoM to the DB9 connector.
For the sake of clarity, I attach the schematic below.
Specifically, my idea is to use a UART port to drive the TXD and RXD pins, and GPIOs for the RTS and CTS pins.
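To make the idea concrete, the direction-control logic I have in mind on the processing-system side would look roughly like the sketch below (Linux user space; the UART device, GPIO number and telegram bytes are placeholders I made up, not a tested implementation):

    /* Rough sketch of half-duplex direction control: drive RTS high while the
     * UART shifts out a telegram, then release the bus for the reply.
     * /dev/ttyPS0 and gpio42 stand in for the real UART and RTS pin. */
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    static void rts_set(int gpio_fd, int level)
    {
        /* RTS pin exported through sysfs; '1' enables the RS-485 driver */
        write(gpio_fd, level ? "1" : "0", 1);
    }

    int main(void)
    {
        int uart = open("/dev/ttyPS0", O_RDWR | O_NOCTTY);
        int rts  = open("/sys/class/gpio/gpio42/value", O_WRONLY);
        if (uart < 0 || rts < 0)
            return 1;

        struct termios tio;
        tcgetattr(uart, &tio);
        cfmakeraw(&tio);
        tio.c_cflag |= PARENB;            /* Profibus UART frames use even parity */
        cfsetspeed(&tio, B1500000);       /* one of the standard Profibus baud rates */
        tcsetattr(uart, TCSANOW, &tio);

        /* Illustrative SD1-style telegram: SD1, DA=2, SA=0, FC=0x49, FCS, ED */
        const unsigned char frame[] = { 0x10, 0x02, 0x00, 0x49, 0x4B, 0x16 };

        rts_set(rts, 1);                  /* enable the transmitter */
        write(uart, frame, sizeof frame);
        tcdrain(uart);                    /* wait for the last bit to leave the UART */
        rts_set(rts, 0);                  /* back to receive, wait for the slave */
        return 0;
    }

One thing I'm unsure about is whether toggling RTS from user space is fast enough at the higher Profibus baud rates, so feedback on that point is welcome too.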
What's your opinion of the architecture described above? Is it a practicable approach? What are the pros and cons?
Thank you so much for your kind answers. Sincerely.

I won't say what you intend to do is not possible, but I will say it would be a huge effort.
I'm not sure how familiar you are with Profibus. Unlike protocols such as Modbus, for which you can find plenty of documentation and code and could have a working solution within a couple of afternoons, building your own Profibus stack from scratch would take quite a long time even for a team of experienced developers.
I have been looking at Profibus for a while, and the only way to get a working network quickly is to use Texas Instruments processors. You can take a look at the answer I wrote here. At the moment there is no free implementation of the stack for Linux, so you need to use TI RTOS. In their support forum they have mentioned a couple of times that they are working on a Linux port, but at the moment you would have to pay for it (which should not be a problem if you are working on a commercial product, of course).
The hardware front would be the easy part. You should be able to replicate the circuit you posted from Siemens as long as your board supports 5V logic (I did not check). If, on the contrary, it works on 3.3V, you only need to change the optocouplers. For a test or at-home environment, you can even drop the optocouplers altogether or just use a MAX485, which you can find ready to use on a PCB for less than a dollar.
Another quick and dirty way to interface with a network of Profibus slaves would be the obvious one: buy a commercial off-the-shelf PLC to act as the master and make your board talk to it. If you use the PLC as a Profibus-to-Modbus gateway, for instance, you could have a working solution in no time. You can even use something like this.
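For instance, if the PLC exposes the slaves' process data as Modbus/TCP holding registers, reading them from your board with libmodbus is only a few lines. A minimal sketch (the PLC address and register layout below are invented for the example):

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <modbus.h>

    int main(void)
    {
        modbus_t *ctx = modbus_new_tcp("192.168.0.10", 502);   /* PLC acting as the gateway */
        if (ctx == NULL || modbus_connect(ctx) == -1) {
            fprintf(stderr, "connection failed: %s\n", modbus_strerror(errno));
            return 1;
        }

        uint16_t regs[8];
        /* assume the PLC maps one slave's input data to holding registers 0..7 */
        if (modbus_read_registers(ctx, 0, 8, regs) == 8) {
            for (int i = 0; i < 8; i++)
                printf("reg[%d] = %u\n", i, regs[i]);
        }

        modbus_close(ctx);
        modbus_free(ctx);
        return 0;
    }

The PLC then takes care of all the Profibus timing and certification issues, and your board only has to speak Modbus.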
I hope my answer gives you some ideas. I'll be looking forward to your comments.

Implementing this on an FPGA is a clever choice.
However, you should also consider your requirements for time-to-market.
With the FPGA approach to a Profibus DP implementation, you must develop the whole Profibus DP stack yourself or buy one from a third-party company (such as Softing). This takes time, and for a serious product you will later need PI certification (also costly). You should also consider compatibility with one of the network configurators on the market, or develop your own configurator.
Regarding your hardware, I have some considerations:
I suggest you use the ISO1176 (ti.com/product/ISO1176) instead of the 75ALS176. It is a more modern approach and the ISO1176 has very good electrical characteristics.
Remember, regarding the physical layer: PROFIBUS DP is a type of RS-485, but RS-485 is not PROFIBUS DP, so not all RS-485 transceivers are suitable for a Profibus DP implementation (https://www.youtube.com/watch?v=lxFeFx2A6dM).
Another approach is to use an embedded module from a company like Hilscher (https://www.hilscher.com/products/product-groups/embedded-modules/) or Anybus (https://www.anybus.com/products/embedded-index). There are other companies as well, but these two also provide a configurator that is compatible with their embedded modules (you will need to configure your network).

Related

In embedded design, what is the actual overhead of using a Linux OS vs programming directly against the CPU? [closed]

I understand that the answer to this question, like most, is "it depends", but what I am looking for is not so much an answer as a rationale for the different things affecting the decision.
My use case is that I have an ARM Cortex-A8 (TI AM335x) running an embedded device. My options are to use some embedded Linux to take advantage of prebuilt drivers and other things that make development faster, but my biggest concern for this project is the speed of the device. Memory and disk space are not much of a concern. I think it is a safe assumption that programming directly against the MPU and not using a full OS would make the application faster, but gaining a 1 or 2 percent speedup is not worth the extra development time.
I imagine that the largest slowdowns are going to come from the kernel context switching and memory mapping but I do not have the knowledge to correctly assess or gauge the extent of those slowdowns. Any guidance would be greatly appreciated!
Your concerns are reasonable. Going bare metal can/will improve performance but it may only be a few percent improvement..."it depends".
Going bare metal for something that has fully functional drivers in Linux but no fully functional drivers bare metal will cost you development and possibly maintenance time. Is it worth that to get the performance gain?
You also have to ask yourself whether you are using the right platform and/or the right approach for whatever it is you want to do on that processor that you think or know is too slow. Are you sure you know where the bottleneck is? Are you sure your optimization is in the right place?
You have not provided any info that would give us a gut feel, so you have to go on your own gut feel as to what path to take: a different embedded platform (pros and cons), bare metal or operating system, Linux or an RTOS or something else, one programming language vs another, one peripheral vs another, and so on. You won't actually know until you try each of these paths, but that can be, and likely is, cost and time prohibitive...
As far as the generic title question of os vs bare metal, the answer is "it depends". The differences can swing widely, from almost the same to hundreds to thousands of times faster on bare metal. But for any particular application/task/algorithm...it depends.
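If the specific worry is kernel context-switch overhead, you can get a ballpark number on your actual board before committing to anything. A crude sketch (plain POSIX, nothing AM335x-specific): two processes bounce one byte back and forth over a pair of pipes, so each round trip costs roughly two context switches plus the pipe overhead.

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define ROUNDS 100000

    int main(void)
    {
        int p2c[2], c2p[2];
        char b = 0;

        if (pipe(p2c) || pipe(c2p))
            return 1;

        if (fork() == 0) {                      /* child: echo every byte back */
            for (int i = 0; i < ROUNDS; i++) {
                read(p2c[0], &b, 1);
                write(c2p[1], &b, 1);
            }
            _exit(0);
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ROUNDS; i++) {      /* parent: drive the ping-pong */
            write(p2c[1], &b, 1);
            read(c2p[0], &b, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.0f ns per round trip (~2 context switches)\n", ns / ROUNDS);
        return 0;
    }

Compare that number with how often your critical path would actually cross the kernel boundary; if it is a tiny fraction of your loop time, the OS overhead is probably not where your few percent will come from.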

Protocols in Long Term Evolution (LTE) / 4G [closed]

4G LTE consists of a protocol stack with different layers. I understand the diagram of that stack. One thing I didn't get: are they protocols or just layers? If they are protocols, can I get open source code for each protocol in C?
I guess by "layers" you mean the PDCP, RLC, MAC, NAS and RRC that you see in the LTE user-plane/control-plane protocol stacks. Yes, they are protocols between a UE (User Equipment) and the LTE network (eNB, MME, etc.), and there are protocol specifications defined for each one in 3GPP. For example, RRC is defined in 36.331 and RLC in 36.322.
I think for some simpler protocols like PDCP or RLC you can find open source code. However, for more complicated protocols like RRC, NAS or MAC, I haven't seen any open source implementations.
Actually, LTE is a whole system (or stack). It is divided into different functions, which we call layers. The way each layer is designed is specified by a protocol, so protocols map onto layers.
So NAS, RRC, PDCP, RLC, MAC and PHY are each both a layer and a protocol. As Alex Wang said, you can find the protocol specifications in 3GPP.
You can also find open source code, but the quality is not so good.
As references:
http://www.openairinterface.org/
http://openlte.sourceforge.net/
The short answer is that you will not obtain open source C code for the protocol stack. There are companies out there that will sell you C code (for amazing amounts of money), but it was derived from the SDL diagrams in the specs, run through a CASE tool to generate the C code.
There are ways around this, though: take the SDL sequences described in the specs and implement them in a sequential design in a functional programming environment like Haskell or Erlang. This is actually how manufacturers of network equipment do it.
A protocol stack is a set of protocol layers. The design is such that each layer has protocols for interworking with the other layers and network entities.
The challenge in finding such tools is that the LTE standards are evolving very fast, so it would take substantial effort to keep an implementation of the complicated layers in line with the changes.

Is there a Verilog tutorial where you build a very simple microprocessor? [closed]

I'm a programmer wishing to learn Verilog.
What would be amazingly neat would be a tutorial where one constructs a tiny microprocessor with a very clean design, something like an Intel 4004, and then goes on to actually make it using an FPGA and gets it to flash LEDs to order.
Is there such a tutorial?
If not, I might have a go at writing one as I try to do it. Has anyone got any recommendations as to resources I might draw on? E.g. a nice open source Verilog compiler, debugging tools, simulators, Verilog tutorials, cheap FPGAs and programming tools, breadboards for LEDs, etc.
I found some glorious slides about an elementary microprocessor here:
http://www.slideshare.net/n380/elementary-processor-tutorial
The open source tools are good for development and testing, but they won't be able to synthesise your HDL to produce a bitstream; you'll need to use one of the manufacturers' tools from Altera or Xilinx (or others).
The manufacturers' tools come as suites and are large (a 5 GB install needing 7 to 12 GB of drive space), available for Windows and Linux: altera.com, xilinx.com.
There are plenty of soft cores out there.
opencores.org would be a good place to have a look at
There is the ZPUino, which is Arduino compatible.
The best idea is to start simple and build up.
Get an FPGA board, implement a simple design (an LED flasher) and work up from there.
It's quite a learning curve, especially if you haven't done much digital electronics.
Remember it's hardware and you're designing circuits, not writing code, so timing is everything.
Have a look at the fpga4fun.com projects and work through them as a starting point.
Xilinx based: Digilent has some low-cost boards, as does Gadget Factory. Avnet has a USB-dongle-based board for $80.
Altera based: Terasic has some nice boards.
Gadget Factory has a Kickstarter project up at the moment for the Papilio plus a few add-on boards: http://www.kickstarter.com/projects/13588168/retrocade-synth-one-chiptune-board-to-rule-them-al
You can play with Verilog without an actual board using GNU Icarus Verilog. You can get a Windows build from here.
There is also a tutorial by Niklaus Wirth on how to design and build a simple CPU, with code in Verilog for a Xilinx board:
https://www.inf.ethz.ch/personal/wirth/FPGA-relatedWork/RISC.pdf
https://www.inf.ethz.ch/personal/wirth/FPGA-relatedWork/ComputerSystemDesign.pdf
Yes, it is the same Wirth who invented Pascal; he is playing with FPGAs in his retirement.
Not sure about an explicit Verilog tutorial, but you might find this class from MIT OpenCourseWare interesting:
http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-004-computation-structures-spring-2009/
All the class notes are online, and the syllabus sounds like it might be what you're interested in (emphasis mine):
6.004 offers an introduction to the engineering of digital systems. Starting with MOS transistors, the course develops a series of building blocks — logic gates, combinational and sequential circuits, finite-state machines, computers and finally complete systems. Both hardware and software mechanisms are explored through a series of design examples.
6.004 is required material for any EECS undergraduate who wants to understand (and ultimately design) digital systems. A good grasp of the material is essential for later courses in digital design, computer architecture and systems. Before taking 6.004, students should feel comfortable using computers; a rudimentary knowledge of programming language concepts (6.001) and electrical fundamentals (6.002) is assumed.
The problem sets and lab exercises are intended to give students "hands-on" experience in designing digital systems; each student completes a gate-level design for a reduced instruction set computer (RISC) processor during the semester. Access to workstations as well as help from the course staff is provided in the lab, but it is possible to complete the assignments using Athena machines or one's home computer.
Altera has great resources on this kind of stuff.
You can try out this link:
http://www.altera.com/education/univ/materials/digital_logic/labs/unv-labs.html
There's a series of lab tutorials that goes through making an embedded processor using Verilog/VHDL.
All of the FPGA vendors have inexpensive ($200-250 range) development kits. For example, the SP601 from Xilinx or the Cyclone III Starter from Altera. I personally own an SP605 (~$500) from Xilinx. You may be able to find cheaper options from other vendors (e.g. SparkFun).
Strictly speaking, while you can find open source VHDL/Verilog tools, I am not aware of any such tools for synthesis (making something the FPGA will use). Both Xilinx and Altera provide free (as in beer) tooling, but they are not open or free (as in libre) software. The Xilinx tools include a simulator (limited in the free version) and can run on Windows or Linux. I assume the Altera tools are similar, but I am not familiar with them.
Building a simple microprocessor in Verilog/VHDL is a pretty common feature in college computer architecture classes. You can undoubtedly find class notes and the like from pretty much any major school.
There is an excellent open source Verilog compiler, Icarus. From the Icarus web page:
Icarus Verilog is a Verilog simulation and synthesis tool. It operates as a compiler, compiling source code written in Verilog (IEEE-1364) into some target format.
I am not aware of a microprocessor-in-verilog tutorial, but there is the OpenCores web site. In the Processors tag under Projects, I see many processors implemented in Verilog or VHDL: 8080, 6502, 8051, Z80, 6805, to name a few. I assume one of these would serve you as an example to get you started.

How important is it for programming skills to have nice gadgets? [closed]

This question was asked by Ed Burns in his book 'Riding the Crest'. I remember that almost all of the rock-star programmers found it helpful to have new and cool gadgets. A programmer stays in touch with the latest designs and hardware and software implementations, which may also affect his work.
What is your opinion on this question?
New gadgets are useful if they expand your horizon.
For example, I recently got myself an iPod touch; this has deeply changed my appreciation for touch-screen user interfaces. Previously I only knew "point of sale" touchscreen interfaces, which are usually horrible.
I believe it is fairly irrelevant.
Firstly, every domain (for example Web, OS X, iPhone, Windows) has its own aesthetics, which means experience from gadgets won't necessarily transfer that well, in the same way a great Windows UI won't necessarily be a great OS X interface.
And owning a gadget hardly ever teaches about the underlying hardware or software implementation.
However, being able to appreciate great design, wherever it appears, whether that is in gadgets, literature or architecture, has to be useful. And a curiosity about the world and a determination for life to be better will probably often lead great programmers to get gadgets; however, this is a case of correlation not being the same as causation. The gadgets don't help the programming skills, but the same traits drive both.
I think what Burns might be getting at there is exposure to other design paradigms. If you are programming in Windows and you get the latest and greatest WinMo phone, you're exposed to a different platform, but really it's just a baby Windows. Contrast that with being a Windows programmer and getting an iPhone or a G1. You're being shown a very different way to get things done, and you'll be able to pick up the parts you like out of someone else's vision.
There's a competitive aspect to many fields that software is often lacking. Competition helps you by showing you how other people solved the problem that you're looking at. If they are selling like gangbusters and you aren't, well, something's up there huh?
Gadgets aren't so important, the PC itself is. Having a fairly new PC, with a nice screen, keyboard and mouse is a must. You are using them most of the day after all, so no point spending loads on the PC and getting cheap peripherals!
For me it's all about keeping things interesting, as I can get bored working on the same thing over and over.
Having a new gadget gives you something new to play with, thus increasing enthusiasm and helping to pick up new things, in turn making you a better developer.
I guess not everyone needs that motivation, but I find it can help during a lull. It doesn't even need to be new hardware: I'm just as happy to pick up a new bit of technology or a new language, and I find it has the same effect.
I'm not a big fan of all the gadget craze. I always try to stay current with new technologies, but I don't think that consuming gadgets has anything to do with it.
Cool gadgets are a good excuse to spend money and increase your cool factor.
Depends on the programmer. Many programmers would be happy with cool gadgets as a job perk, but I wouldn't say it affects their productivity directly. If I had to choose, I'd rather get a good chair than a palmtop of the same price.
Things I've missed while working as a programmer in various companies of all sizes:
A decent chair (jesus people)
A good, fast computer (even if they don't work 3D)
A large screen (two if possible)
A hand-held device capable of reading mail (I suppose this would fit as a 'gadget')
Depends what you're working on. I'd say that if you're doing UI work, have lots of diverse UIs to play with. Make sure they have a Mac and a PC, maybe one or two different kinds of smartphones and/or a PDA -- if you're that kind of company, maybe even a Nintendo Wii in the breakroom.
If I can program on the gadget, sure.
I get considerably less out of it (for programming) if I don't get to program on it.
It's a self-image maintenance thing. Having the latest geekbling helps make one feel like the sort of wired.com poster boy who's on top of all the trends, which motivates one to keep on top of the trends.
Really, almost anything you see people doing that seems somewhat inexplicable is probably an identity maintenance activity.

How does off-the-shelf software fit in with agile development? [closed]

Maybe my understanding of agile development isn't as good as it should be, but I'm curious how an agile developer would potentially use off-the-shelf (OTS) software when the requirements and the knowledge of what the final system should be change as rapidly as I understand they do (often after each iteration of development).
I see two situations that are of particular interest to me:
(1) An OTS system meets the initial set of requirements with little to no modification, other than potential integration into an existing system. However, within a few iterations of development, this system no longer meets the needs without rewriting the core code. The developers must choose to either spend additional time learning the core code behind this OTS software or throw it away and build from scratch. Either would have a drastic impact on development time and project cost.
(2) The initial needs are not like any existing OTS system available; however, in the end, when the customer accepts the product, it ends up being much like existing solutions due to requirement additions and subtractions. If the developers had gathered more requirements and spent more time working on them up front, an existing solution could have been used instead of building again. The project was delivered, but later and at a higher cost than necessary.
As a software engineer, part of my responsibility (as I have been taught) is to deliver high-quality software to the customer on time at the lowest possible cost (among other things). Agile development allows for high-quality software, but in some cases it might not be apparent that there are better alternatives until it is too late and too much money has been spent.
My questions are:
How does off-the-shelf software fit in with agile development?
How do the agile manager and agile developer deal with these cases?
What do the agile paradigms say about these cases?
Scenario 1:
This can occur regardless of the OTS nature of the component. Agile does not mean near-sighted: you need to know the big chunks, the framework bits, and spend thinking time on them beforehand. That said, you can only build to what you know; delay only until the last responsible moment, then pick one of the alternatives and start on it. (I'd avoid a third-party application unless the cost of developing it in-house is infeasible, but that's just me.) Prototype multiple solutions to check feasibility against the list of known requirements. Keep things loosely coupled (replaceable), easy to change and fully tested. If you reach the fork of keep hacking vs. rewrite, you need to think about which has better value for the business and pick that option. It comes down to 'Now that we're here, what's the best we can do now?'
Scenario 2:
This can happen, although the chances are slim compared to the team spending 2-3 months trying to get the requirements 'finalized' only to find that the market needs or the customer's mind have changed and 'now we want it this way'. Once again, it's a question of how long you are prepared to investigate and explore before committing to a path of action. Decide wisely with whatever information you have up to that point. Hindsight is always 20/20, but the customers won't wait forever. You can't wait until the requirements coalesce to fit a known OTS component :)
Agile says do whatever makes sense and strip out the non-value-adding activities :) Agile is no magic bullet. Just my 2 agile cents :)
Not a strict answer per se, but I think that using off-the-shelf software as a component in a software solution can be very beneficial if:
Its data is open, e.g. there is an open database or a web service to interact with it
The off-the-shelf system can be customised easily using a similar programming paradigm to the rest of your solution
It can be seamlessly adapted to the rest of your workflow
I'm a big fan of not re-inventing the wheel, and using your development skills to design the 'glue' between off-the-shelf solutions can be a big win.
Remember 'open' is the important part, and a vendor will often tout their solution as open when it isn't really.
I think I read somewhere that if during an iteration you discover that you have more than 20% more work than you initially thought, then you should abandon the sprint and start planning a new one, taking the additional work into account.
So this would mean replanning with the business to see if they still want to go ahead with the original requirements now that you know more.
At our company we also make use of prototyping before the sprint to try to identify these kinds of situations before they arise in a sprint, although of course that still may not catch the kind of situation that you describe.
C2 wiki discussion: http://c2.com/cgi/wiki?BuyDontBuild
