I know that Origen supports ASCII file format compilation using its Resources feature. Are there any undocumented APIs for native timing and levels creation for the V93K?
thx
We do have APIs to natively define timing waveforms in Origen; however, these are a recent addition to the framework, which cannot currently render them to V93K format.
The natural evolution is for OrigenTesters to be updated to take such a definition and render it to V93K, UltraFLEX, etc., but no one has done the work to enable that yet.
So, it's something that we have the foundation in place for, and many people would like to see it, but there is currently an opening for someone in the community to step up and drive it forward.
By the way, this same timing API drives the waveforms applied by the upcoming native Origen simulation solution, so it is a natural evolution that you will be able to apply on the ATE the exact timing that you verified in simulation.
In time, I would also expect to see the ability to import timing from something like the STIL format into Origen, just as you can with regs etc., and then leverage Origen to do the conversion to the final ATE format.
As ever, lots to do!
Can anyone recommend a library that really supports RTL languages such as Hebrew or Arabic, and that also supports links, images, and thumbnails/in-place images, handles large spreadsheets, can set filters and page breaks, and has good performance? These are very important criteria.
I started to use PDFKit, which is really good, but its language support is awful and the data that we work with cannot be manipulated; because of that we need to change the library. Price is not a problem. Someone recommended Docmosis; does anyone have experience with it?
Thank you very much
Docmosis has customers that have been generating RTL documents successfully for years. Docmosis supports the image-handling requirements you mention and large data sets, but has no direct spreadsheet functionality.
I suggest doing a free trial (please note I work for Docmosis) and asking their support team about specific points of interest. The cloud is the easiest way to test the functionality (since you have no install or setup to do), even if you wouldn't choose cloud for your actual deployment.
I am planning to build a web dashboard where I can analyze a company's financial records through charts, tables, and so on.
I already have the software, so the dashboard will only read the data, and not manipulate it.
It will be something like this, but simpler, containing reports, graphics, and options to select dates, intervals, etc.
But I am wondering: is it viable to use Clojure (along with jQuery, CSS, and HTML)?
Currently I work with the Luminus web framework for Clojure, but I am not sure whether it is worth doing this in Clojure or whether other languages are better suited to it.
Of course I am familiar with the language already, so it is a pro. But I am also open to suggestions.
It is not that hard at all! In fact, there are great libraries that solve all the challenges involved in building a dashboard: scheduling, caching, transferring data to the client, and visualization (with auto-reloading).
We are working on a framework for building real-time Clojure dashboards. Have a look at https://github.com/multunus/dashboard-clj. We have used the following libraries:
Immutant's scheduler for scheduling
Core.async to simplify data flow on the backend
Sente for websocket communication
re-frame for client side state and view management
Stuart Sierra's component library for managing stateful components
In order to create beautiful visualizations you may take a look at d3 or Highcharts. CLJSJS and the Reagent Cookbook give a good overview of how to use these JS libraries (and many, many more).
Clojure is an absolutely fantastic tool for building a web dashboard. The other answers here do a pretty good job of laying out the landscape as far as basic web technologies go. On that side of things, I'll simply add that I'm a big Reagent/re-frame fan, and would go that route as a React wrapper over Om.
As far as data visualizations go, you may be interested in checking out Vega-Lite & Vega, which you can use from Clojure or ClojureScript (Reagent) via a simple but flexible dataviz library I wrote called Oz:
https://github.com/metasoarous/oz
Vega-Lite & Vega are designed around the ideas of the Grammar of Graphics, which inspired R's popular ggplot2 library. The core idea is that data visualizations should be built from declarative descriptions of how properties of the data map to aesthetics of the visualization. Vega-Lite & Vega, however, take things one step further in providing a grammar of interaction, which allows for the construction of interactive data visualizations and sophisticated explorer views. Moreover, they up the ante on the declarative nature of the Grammar of Graphics in that Vega-Lite and Vega specifications are described as pure data (JSON), making them very much in line with the data-driven philosophy of the Clojure world and paving the way for seamless interoperability with other languages and for sharing features.
Vega-Lite is more or less the higher-level, day-to-day data science tool, focusing on providing high leverage and automation based on very spartan specifications. It compiles to Vega, which is a somewhat lower-level and more powerful, but less automated, version of Vega-Lite. Usually, starting out with Vega-Lite and switching to Vega only as needed is sufficient.
For more on Vega & Vega-Lite see: https://vega.github.io.
I don't see any reason why it wouldn't be viable to build a web dashboard in ClojureScript.
I suggest that you look into a library called Reagent, which provides a minimalistic interface between React and ClojureScript, so theoretically everything you can do with React should be possible in ClojureScript/Reagent (with the added benefit that it can be faster than plain React). You might also be interested in re-frame, which is a framework for building single-page applications.
React has been proven to be a robust tool for building powerful UIs.
You can do everything you can do in JavaScript using ClojureScript (just as you can do everything you do in Java using Clojure). So as others have commented, I would definitely recommend ClojureScript, especially since you know Clojure already. You may find out that you do not need jQuery etc.
The common choice for generating HTML is to use React.js via a wrapper library like:
reagent
Om
Both can generate HTML.
Reagent (and maybe re-frame) is the easiest one to get started with, especially since there are component libraries like soda-ash and a hiccup-like syntax.
Om (by the creator of ClojureScript), and maybe Untangled, are also a good choice, especially if you need to manage complex data. You can get a hiccup-like syntax via sablono (see the sketch below).
Dashboards have been built using Om (see the CircleCI dashboard for a real-life example). This is the one I use personally.
Hoplon is also an interesting choice, as you mentioned.
Also have a look at cljsjs for pre-packaged js libraries.
As for the CSS, this is an orthogonal concern, but yes, of course you can use it (or even Less and Sass; there are Clojure wrappers for them). You can even generate CSS from Clojure code with garden.
You can find an example project called saapas that uses Boot (by the same authors as Hoplon), Sass, and Reagent, but there are many more in the wild.
As you see there are many viable options in ClojureScript to build a dashboard. I am myself building one and settled on Om.next, partly because I was using React.js before.
What are the key benefits of using UML models in software maintenance? I have come across only a handful of papers regarding the costs and benefits of UML models in software maintenance.
UML can ALWAYS be useful in software-related activities, but you must know what you are doing, rather than using it because "my boss says UML is cool!".
The benefits of UML depend on many factors. In some situations it is likely to be more beneficial than in others; I will try to mention some of them.
Likely to be more beneficial:
the larger and more complex your subject is, the more benefit you can expect, especially when there are relationships between different aspects
if this SW maintenance involves some extra development/extension, it can be very useful to use UML to clarify it. You can show the existing system and the way it should be extended, and you can of course always show the new requirements
if your system is already modelled in UML, you can use it to locate the problem and plan further enhancements
if both the modeller and the model reader know OO and have some experience with UML, they will almost always make it beneficial
if you want to generate some code from the model later on
if you need to support a system with no documentation, it can be useful to document it first (use reverse engineering to import the code and organize it in UML)
if you plan to maintain this system for a long time and with lots of people
Maybe not so beneficial:
if the maintenance does not involve intensive extension/programming
if the subject under maintenance is too small - it can be more efficient to use natural language or even the spoken word
if either the person who models or the person who uses the diagrams later is not skilled in OO and UML. If neither of them is, forget about using UML and start learning it :)
if you already have some other kind of documentation
Imagine this:
Intro
you are a programmer assigned to do maintenance on a large legacy code base (thousands of files, an awful lot of lines of code, written over the past several years by several different people who have since left; coding conventions and class libraries changed several times during the original development)
Nobody likes that work; it is boring and repetitive, so people try to get away as soon as they can to do some fresh and shiny development in some hyped, cool language. You are a 'junior maintainer', and so are most of your co-workers, so you have no wizard to ask for magically fast and correct answers to your newbie questions.
The code is structured into many different files and many different classes (e.g. in Java, to read the code of 20 classes you have to open 20 different files; in C, in order to read the code you always have to read two files at the same time, the *.h and the *.c, etc.).
Now you have to troubleshoot some problem. You are able to reproduce it in the test lab, so you pick up a debugger and start hunting the problem: where do the control and data flow, and where is the hidden failure (maybe some assert will fail, or maybe you'll spot some 'oh, what a stupid bug' with your hawk eye).
Nightmare
No assert fails and you do not spot anything, because it is C++ code, and saying "hello" in C++ takes about ten cryptic sentences with the "hello" buried somewhere in the middle. Most of the code is not written in the language you know but in a language of macros you have never heard of, with calls to methods of classes you have never heard of.
Moreover, you find out there are several code flows (threads) running at the same time, the data packet you are trying to track gets marshaled somewhere and queued for another thread to process, and the debugger stops at a waiting lock.
You're trying to picture what is going on using the debugger, patiently stepping into the unknown and taking notes on what the different words mean.
After several days of debugging, pulling your hair out, cursing all the developers before you, and seriously thinking about quitting this job...
Relief
...you find some documentation. Someone wrote it using human words, designed for humans to read, explaining the basic concepts, giving links to places in the code and to other explanatory documents, and there are even some pictures in the docs.
And you look at the pictures (UML diagrams or some other pre-UML diagrams), and now you can see where the data flows, what some of the classes you have already met are supposed to do, and where the places you should look are.
And you find that the UML diagrams saved you not only many dead neurons but also countless hours of reading the code, as instead of reading and understanding thousands of lines of code you can just read hundreds of lines of explanatory documents, or just several explanatory pictures.
Conclusion
Powered by that new 'I see' knowledge, you put breakpoints at several key places and inspect the variable values, and you find that at this place X should not be 1, because the docs said ... and you already know the dependencies of the X variable. So you put several other conditional breakpoints around, and you finally find a bug in the logic: the code flows through some combination of conditions that was not foreseen.
So you change a few letters in one line of code, the bug is fixed, and you have earned your salary. The maintenance job is not such a nightmare anymore (though you still think about quitting), and the UML pictures helped.
So you draw some more UML-style diagrams explaining various interdependencies (that you learned the hard way), first using paper and pencil, because small documentation one day is better than no documentation ever: http://agilemodeling.com/essays/documentLate.htm
I know the conclusion is partially sci-fi, as no maintainer will actually go and start creating the missing documentation for a system that is going to be replaced in several years by something newer and shinier. (S)he will just auto-create some UML diagrams by reverse engineering the code, throw them away after use, and quit the job as soon as possible.
But do you get the picture of the benefits of using UML models in software maintenance, relative to the costs?
I am working on a robotics research project, and would like to know: Does anyone have suggestions for best practices when organizing scientific data and code? Does anyone know of existing scientific libraries with source that I could examine?
Here are the elements of our 'suite':
Experiments - Two types:
Gathering data from an existing, 'natural' system.
Data from running behaviors on robotic system.
Models
Description of the dynamical system - dynamics, kinematics, etc.
Parameters for said system, some of which are derived from type 1 experiments
Simulation - trying to simulate natural behaviors, simulating behaviors on robots
Implementation - code for controlling the robots. Granted this is a large undertaking and has a large infrastructure of its own.
Some design aspects of our 'suite':
It would be good if the simulation environment allowed for 'rapid prototyping' (scripts / an interactive prompt for simple hacks, quick data inspection, etc. - definitely something hard to incorporate). Currently satisfied through scripting languages (Python, MATLAB).
Multiple programming languages
Distributed, collaborative setup - Will be using Git
Unit tests have not yet been incorporated, but will hopefully be later on
Cross Platform (unfortunately) - I am used to Linux, but my team members use Windows, and some of our tools are wed to that platform
I saw this post, and the books look interesting; I have ordered "Writing Scientific Software", but I feel it will focus primarily on the implementation of the simulation code and less on the overall organization.
The situation you describe is very similar to what we have in our surface dynamics lab.
Some of the work involves keeping measurement data which is analysed in real time, or saved for later analysis. Other work, on the other hand, involves running simulations and analysing their results.
The data management scheme, which the lab leader picked up at Cambridge while studying there, is centred around a main server which holds the personal files of all lab members. Each member accesses the files from his workstation by mounting the appropriate server folder using NFS. This has its merits and faults: it is easier to back up everything, but it is problematic when processing large amounts of data over the network. For this reason I am an exception in the lab, since the simulation I work with generates a large amount of data. This data is saved on my workstation, and only the code used to generate it (the source code of the simulation and the configuration files) is saved on the server.
I also keep my code in an online SVN service, since I cannot log in to the lab server from home. This is a mandatory practice, which stems from the need to be able to reproduce older results on demand and trace changes to the code if some obscure bug appears; hence the need to maintain older versions and configuration files.
We also employ low tech methods, such as lab notebooks to record results, modifications, etc.
This content can sometimes be more abstract (there is no point describing every changed line in the code - you have diff for that; just the purpose of the change, perhaps some notes about the implementation, and its date).
Work is done mostly with MATLAB. Again, I am an exception, as I prefer Python. I also use C for the data-generating simulation. Testing is mostly of convergence, since my project is currently concerned with comparing against computational models. I just generate results with different configurations, each saved in its own respective folder (which I track in my lab logbook). This has the benefit of being able to control and interface with the data exactly as I want to, instead of conforming to someone else's ideas and formats.
Can anyone give me an example of a successful non-programmer, 5GL (not that I am sure what they are!), visual, zero-source-code or similar tool that business users or analysts can use to create applications?
I don't believe there are, and I would like to be proven wrong.
At the company I work at, we have developed an in-house MVC framework that we use to develop web applications. It is basically a reduced state machine written in XML (à la Spring Web Flow) for the controller, plus a simple template-based engine for presentation. Some of the benefits:
dynamic nature: no need to recompile to see the changes
reduced "semantic load": basically, actions in the controller only know "if". Therefore, it is easy to train someone to develop apps.
The current trend in the company (or at least at management level) is to try to produce tools for the platform that require zero source code, are visual, etc. This has a good effect on clients (or at least on their management level), since:
they can be convinced that this way they will need no programmers, or at least will be able to hire bottom-of-the-ladder programmers who cost much less than typical programmers.
It appears that there is reduced risk involved, since the tool limits what the implementer or user (just don't use the word programmer!) can do, so there is less chance that he can introduce errors.
It appears to simplify the whole problem, since there seems to be no programming involved (programming being notoriously complex). Since applications load dynamically, there is less of the complexity typically associated with the J2EE lifecycle: compile, package, deploy, etc.
I am personally skeptical that something like this can be achieved. The solution we have today has a number of problems:
Implementers write JavaScript code to enrich pages (this could be solved by developing widgets). Albeit client-side, it is still code that can become very complex and result in some difficult bugs.
There is already a visual tool, but implementers prefer editing the XML directly, since it is quicker and easier. For comparison, I guess not many people use the Eclipse Spring Web Flow plug-in to edit flow XML.
There is very poor reuse in the solution (it is based on copy-pasting XML). This hampers productivity and some other aspects, like fostering business knowledge.
There have been numerous performance and other issues caused by incorrect use of the tools. No matter how reduced the playing field, there is always room for error.
While the platform is probably more productive than Struts, I doubt it is more productive than today’s RAD web frameworks like RoR or Grails.
Verbosity
Historically, there have been numerous failures in this direction. The idea of programs written by non-programmers is old but, AFAIK, has never been successful. At a certain level, the power of source code becomes irreplaceable.
Today there is a lot of talk about DSLs, but not as something that non-programmers should write; more as something they could read.
It seems to me that the direction the company is taking in this respect is a dead end. What do you think?
EDIT: It is worth noting (and that's where some of the inspiration is coming from) that many big players are experimenting in this direction. See Microsoft Popfly, Google Sites, iRise, many mashup solutions, etc.
Yes, it's a dead end. The problem is simple: no matter how simple you make the expression of a solution, you still have to analyze and understand the problem to be solved. That's about 80-90% of how (most good) programmers spend their time, and it's the part that takes the real skill and thinking. Yes, once you've decided what to do, there's some skill involved in figuring out how to do that (in a programming language of your choice). In most cases, that's a small part of the problem, and the least open to things like schedule slippage, cost overrun or outright failure.
Most serious problems in software projects occur at a much earlier stage: the part where you're simply trying to figure out what the system should do, what users must/should/may do which things, what problems the system will (and won't) attempt to solve, and so on. Those are the hard problems, and changing the environment to express the solution in some way other than source code will do precisely nothing to help with any of those difficult problems.
For a more complete treatise on the subject, you might want to read No Silver Bullet - Essence and Accidents of Software Engineering, by Frederick Brooks (included in the 20th Anniversary Edition of The Mythical Man-Month). The entire paper is essentially about this question: how much of the effort involved in software engineering is essential, and how much is an accidental result of the tools, environments, programming languages, etc., that we use? His conclusion was that no available technology gave any reasonable hope of improving productivity by as much as one order of magnitude.
Not to question the decision to use 5GLs, etc, but programming is hard.
Jon Skeet - Programming is Hard
Coding Horror - Programming is Hard
5GLs have been considered a dead-end for a while now.
I'm thinking of the family of products that includes MS Access, Excel, Clarion for DOS, etc., where you can make applications with zero source code and no programmers. Not that they are capable of AI-quality operations, but they can make very usable applications.
There will always be "real" languages to do the work, but we can drag and drop the workflow.
I'm using Apple's Automator, which allows users to chain together "Actions" exposed by the various applications on their systems.
Actions have inputs and/or outputs, some have UI elements, and basic logic can be applied to the chain.
The key difference between Automator and other visual environments is that the actions use existing application code and don't require any special installation.
More info: http://www.macosxautomation.com/automator/
I've used it to "automate" many batch processes and have had really great results (it surprises me every time). I've got it running builds and backups, and whenever I need to process a mess of text files it comes through.
I would love to know whether iHook or Platypus (OS X wrapper builders for shell scripts) could let me develop plugins in Python...
There is definitely room for more applications like this, and for more support from OS X application developers, but the idea is sound.
Until there's major support there aren't many "actions" available, but a quick check on my system just showed me an extra 30 that I didn't know I had.
PS: There was another app for pre-OS X systems called "Filter Tops" which had a much more limited set of plugins.
How about Dabble DB?
Of course, just like MS Access and other non-programmer programming platforms, it has some necessary limitations so that users won't get themselves stuck... as John pointed out, programming is hard. But it does give the user a lot of power, and it seems that most applications non-programmers want to build are database-type applications anyway.