We are working on data logging code and I would like to turn it into a library.
The only thing that different programs will need to define is the struct type for the sample that they want to save. The library will save a sample every x period. But I don't know exactly how to reference an externally defined DUT in the library code. Is it possible to declare the DUT as an interface or something similar? There must be a way to do this, but I'm not sure what it is.
CODESYS does not have generics, and there is no way around that. Although I am not convinced it would be an elegant or fitting design, there is one way you might be able to do it if you really want your type to be visible in your logging library, but it has serious drawbacks.
You can create another library, or more accurately a family of libraries that will all use the same placeholder. Each of these libraries can define a type of the same name. Reference this placeholder from your logging library, so that it will resolve the name. That will allow your library to compile. Then, by controlling which library the placeholder resolves to inside the project, you can select which type is used. The big downside is you cannot use your logging library with two different types inside the same project, as you cannot get a placeholder to resolve to two libraries at the same time.
Personally, I would probably implement the logging library with some sort of log entry writer function block that contains a size plus an array of bytes and manages the details of the actual logging, and then define derived function blocks in the projects/libraries that need logging. Each of these derived function blocks would give access to the bytes in a typed way (a method, or an exposed reference/pointer) and assign the size based on the data type used (SIZEOF in FB_Init, for instance). This requires a bit of code for each type, but really not much, and new types are easy to add with a 20-second copy+paste+modify.
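Here is a minimal Structured Text sketch of that design; the names (FB_LogEntryWriter, FB_MySampleEntry, ST_MySample) and the sample fields are my own illustrative choices, not part of any existing library:

    // Library side: generic writer that only knows a size and a byte pointer.
    FUNCTION_BLOCK FB_LogEntryWriter
    VAR
        nSize : UDINT;              // size of one sample in bytes
        pData : POINTER TO BYTE;    // set by the derived FB to point at the typed sample
    END_VAR
    // ... cyclic code copies nSize bytes from pData^ to the log every x period ...
    END_FUNCTION_BLOCK

    // Project side: the sample type and a thin typed wrapper.
    TYPE ST_MySample :
    STRUCT
        nCounter : DINT;
        rTemp    : REAL;
    END_STRUCT
    END_TYPE

    FUNCTION_BLOCK FB_MySampleEntry EXTENDS FB_LogEntryWriter
    VAR
        sample : ST_MySample;       // typed access for the application
    END_VAR
    END_FUNCTION_BLOCK

    // FB_Init of FB_MySampleEntry: wire up size and pointer once at startup.
    METHOD FB_Init : BOOL
    VAR_INPUT
        bInitRetains : BOOL;        // standard FB_Init inputs
        bInCopyCode  : BOOL;
    END_VAR
    nSize := SIZEOF(sample);
    pData := ADR(sample);

Adding a new type is then just copying the wrapper FB and swapping the struct.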
We are currently working on performance issues with our provided OData interface, since the UI5 front end issues a read request with multiple expand paths attached. Due to the framework's generic handling of the request, this leads to additional processing per expand option, which we need to prevent.
Reading the blog post about this topic, there seems to be a way to override the generic handling:
https://blogs.sap.com/2018/03/19/sap-cloud-platform-sdk-for-service-development-create-odata-service-7-more-navigation-read-create-expand-sqo/
The post says:

"In this case it is us who need to decide if we can afford to rely on the FWK-functionality. Of course, such generic support cannot be performant. But for small amount of data it is just nice to get it for free. Stay tuned to learn how to overwrite such generic FWK-functionality with own specific implementation."
However, there is no further blog post on this, and looking through the framework, my only idea for overriding this would be to configure and use our own com.sap.gateway.core.api.provider.data.IDataProvider implementation which handles the request in a custom way, although this would be an immense workaround.
So the question is: is there some leaner or easier approach to overriding this functionality that I have missed?
UPDATE:
I was able to create a custom data provider and register it with the RuntimeDelegate after servlet initialization. This custom data provider then checks for a custom annotation on the mapped method handler to see whether expand should be handled or not. If not, it just reads the entity, but does not perform the generic expanded read. This works more or less fine, but what is of course missing is a way to pass the properties to be expanded in the ReadRequest. So far only a static implementation is possible to solve our performance problem, but I would gladly take a hint if there is another, better solution for this...
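For illustration, here is a sketch of the annotation part of that approach in plain Java. The annotation name (SkipGenericExpand) and the helper class are hypothetical, and the actual IDataProvider/RuntimeDelegate wiring is only indicated in comments, since the exact SAP Gateway signatures are framework-specific:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Method;

    // Hypothetical marker annotation: placed on a mapped method handler to
    // signal that the generic expand processing should be skipped.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface SkipGenericExpand {
    }

    class ExpandPolicy {
        // True when the handler has opted out of the generic expanded read.
        static boolean skipGenericExpand(Method handler) {
            return handler.isAnnotationPresent(SkipGenericExpand.class);
        }
    }

    // Inside the custom data provider (registered with the RuntimeDelegate
    // after servlet initialization, as described above), the read path then
    // branches roughly like this:
    //
    //     if (ExpandPolicy.skipGenericExpand(mappedHandler)) {
    //         // read only the entity; skip the generic expanded read
    //     } else {
    //         // fall back to the framework's generic expand handling
    //     }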
At the time of writing, no better approach exists.
I understand how to use the FlatBufferBuilder and the specific type builder (e.g., MyNestedTableBuilder) to get the WIPOffset, and then use that to get the finished_data buffer (&[u8]). I have then been using get_root to get an object based on the buffer, so now I have an instance of MyNestedTable. Then I need to pass that to another function and create a new table instance via a new builder, MyTable, which has the field add_my_nested_table. I cannot see how to do this without unpacking MyNestedTable and rebuilding it again (which seems very inefficient). I am sure there is a good way to do this; I just haven't found it, even after studying the generated code and API.
Generally we have a need to pass objects around and reuse them, over the network or via API calls in Rust.
MyNestedTable isn't really an object; it is a handle to data inside the serialized data (your &[u8]), and any field access looks up that data on the fly.
None of the base APIs for any of the FlatBuffers supported languages (including Rust) have code generated that allows automatic re-serializing, since that is not a frequent operation in most use cases (you already have the serialized data).
The way to do it is through the optional "object API", which is supported in C++ and some other languages, but not yet in Rust. CasperN is working on such an API.
Until then, you may consider using nested_flatbuffer or some other construct to pass the serialized data directly to wherever it needs to go.
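The nested_flatbuffer route looks roughly like this in Rust. This is a sketch under assumptions: MyTable, MyTableArgs, and the nested field are stand-ins for your generated code (which must be in scope), and the schema attribute shown in the comment is what makes the generated accessors typed:

    // Schema idea (not Rust):
    //     table MyTable {
    //         nested: [ubyte] (nested_flatbuffer: "MyNestedTable");
    //     }

    // Instead of unpacking MyNestedTable, keep the finished bytes from the
    // first builder and store them verbatim as a nested flatbuffer:
    fn wrap_nested(nested_bytes: &[u8]) -> Vec<u8> {
        let mut b = flatbuffers::FlatBufferBuilder::new();
        // One copy of the already-serialized bytes; no field-by-field rebuild.
        let payload = b.create_vector(nested_bytes);
        let t = MyTable::create(&mut b, &MyTableArgs { nested: Some(payload) });
        b.finish(t, None);
        b.finished_data().to_vec()
    }

    // The receiver gets a typed view back without any unpacking:
    //     let table = flatbuffers::get_root::<MyTable>(&bytes);
    //     let nested = flatbuffers::get_root::<MyNestedTable>(table.nested().unwrap());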
It might seem like an odd question, but I am building a module that abstracts out certain logic for different data storage options. The idea is that anyone using the module could use it with MongoDB or Redis or SQL or (insert whatever option you want here).
I have a basic interface I am following in each of my implementations by exporting the same function names and signatures, just with different implementations for each of the various data storage options.
Right now I have something like helper = require(process.env.data_storage_helper)
Then the helper can be used the same way.
Is this bad practice, and if so, why? Is there a better or suggested way to accomplish this kind of abstraction?
This isn't technically bad practice, but I would add a level of indirection. Instead, have those options stored in configuration files that get picked based on NODE_ENV or another environment variable, and then use the same key in the configuration object no matter what. A good example of a framework employing this is kraken.js, which auto-loads a configuration file based on NODE_ENV.
You can then grab a handle on the configuration object after Kraken has started up (or whatever you end up using; Kraken uses confit under the hood, and you can always use that library directly), and read the "data_storage_helper" key to see what your store is backed by, inside a storage module that does the decision-making.
The big pro of this approach is that if you'd like to change the data storage or any other behavior of another module, you can just update a JSON file. :-)
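A minimal sketch of that indirection, with the file names and config key being my own illustrative choices (confit/kraken would load the files for you, but plain require works the same way):

    // config/development.json:  { "data_storage_helper": "./stores/mongodb" }
    // config/production.json:   { "data_storage_helper": "./stores/redis" }

    // storage.js -- picks the backing store once, based on NODE_ENV
    const env = process.env.NODE_ENV || 'development';
    const config = require(`./config/${env}.json`);

    // Every backend module exports the same function names and signatures,
    // so callers never know (or care) which store is behind it.
    module.exports = require(config.data_storage_helper);

    // Callers stay backend-agnostic:
    //     const helper = require('./storage');
    //     helper.save(record);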
Summary
Is there something like http://en.wikipedia.org/wiki/E_programming_language as a DSL inside of Clojure?
Background
I'm aware of:
http://bit.ly/N4jnTI and http://bit.ly/Lm3SSD
However, neither provides what I want.
Context
I'm a big fan of both capability systems and information flow. And I'm wondering if anyone has developed Clojure DSLs for these two techniques. The following would be ideal:
all objects have some tag (say, in their metadata) that lists who has read access to the object
when I want to run a query as user "foo", I set some context var saying "now, use only the capabilities of foo" -- then the function, when it tries to reach objects, either gets the object (if foo has access to it) or nil (if foo does not have access to it). Leaking information about the existence of objects is not a big deal to me at the moment.
Question
So the question is -- is this something easy to do as a Clojure DSL? Where each object has some capability tag, and we can execute pieces of function/code under certain tags, and the runtime system makes sure that no one gets access to things they're not supposed to access.
Thanks!
You can do this with metadata and preconditions, and then create macros to add a DSL/syntax on top, though I would recommend skipping the macros and going for just preconditions and metadata.
Each object would have a piece of metadata with a list of its capabilities.
Each function would have a precondition that checked the metadata.
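A minimal sketch of that combination, with all names (*user*, can-read?, fetch) being my own illustrative choices:

    ;; Context var: who the current code is running as.
    (def ^:dynamic *user* nil)

    ;; The capability tag lives in the object's metadata.
    (defn can-read? [obj]
      (contains? (:readers (meta obj)) *user*))

    ;; Precondition checks the metadata before handing the object out.
    (defn fetch [obj]
      {:pre [(can-read? obj)]}
      obj)

    ;; Tag an object with its readers, then run as user "foo":
    (def secret ^{:readers #{"foo"}} [:launch-codes])
    (binding [*user* "foo"]
      (fetch secret))   ; => [:launch-codes]

    ;; Outside the binding (or as another user) the :pre assertion fails;
    ;; returning nil instead would just mean wrapping the check in an if.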
Has anyone seen metamorphic code -- that is, code that generates and runs instructions (including IL and Java Bytecode, as well as native code) -- used to reduce boilerplate code?
Regardless of the application or language, typically one has some database code to get rows from the database and return a list of objects. Of course, there are countless ways of doing this based on your database connector. You might end up accessing the cells of the row by index (awkward, because changing "SELECT Name, Age" to "SELECT Age, Name" would break your code, plus the indexes obfuscate meaning), or using myObject.Age = resultRow.getValue("Age") (awkward, because this involves simply going through every field to set its data based on the columns).
Keeping with the database theme, LINQ to SQL is awesome. However, defining data models is less awesome, especially when your database has so many tables that SSMS can't list all of them in the object browser. Also, it's not the stored procedure writing or the SQL involvement that I dislike; it's just the mapping of objects to the database.
Someone at the company at which I intern wrote a really awesome method on our SqlCommand class (which inherits from the System one) that uses .NET reflection, with System.Reflection.Emit, to generate a method that sets fields (decorated with an attribute containing the name of the column) on any model object with a nullary constructor. I would consider this metamorphic because a specific part of the program writes new methods.
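As an illustration of the idea (using plain reflection rather than System.Reflection.Emit, to keep it short; the attribute and class names are my own, not the intern project's code):

    using System;
    using System.Data;
    using System.Reflection;

    // Attribute carrying the column name, as described above.
    [AttributeUsage(AttributeTargets.Property)]
    class ColumnAttribute : Attribute
    {
        public string Name { get; }
        public ColumnAttribute(string name) { Name = name; }
    }

    class Person
    {
        [Column("Name")] public string Name { get; set; }
        [Column("Age")]  public int Age { get; set; }
    }

    static class Mapper
    {
        // Builds a T from the current row: each [Column]-decorated property is
        // assigned by column name, so reordering "SELECT Name, Age" to
        // "SELECT Age, Name" cannot break the mapping.
        public static T Map<T>(IDataRecord row) where T : new()
        {
            var obj = new T();
            foreach (PropertyInfo p in typeof(T).GetProperties())
            {
                var col = p.GetCustomAttribute<ColumnAttribute>();
                if (col != null)
                    p.SetValue(obj, row[col.Name]);
            }
            return obj;
        }
    }

An Emit-based version would generate this per-type loop once as IL and cache it, which avoids the per-row reflection cost.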
This pattern of generating objects from the database is just one example. One I came across two days ago was databinding support for SWT (via JFace). I made these perfectly clean models with setAddress(Address address) and getName(), and now I have to pollute the setters with PropertyChangeSupport firing and carry around a PropertyChangeSupport instance (even if it is just in an abstract base class)! Then I found PojoBindables, and now I feel like a level 80 databinder, simply because I need to write less.
Specifically, things that use native code with something like this or a Java Agent would be really sweet.
Generic programming might be up your alley. The Concept C++ website has a really good tutorial that covers abstraction and lifting, ideas that can be used in any language to turn boilerplate code into a positive force. By examining a bunch of boilerplate methods that are almost exactly the same, you can derive a set of requirements that unite the code conceptually ("to make X happen you must do Y, so to make X1 happen you must do Y with difference 1"). From there you can use a template to capture the commonalities and use the template inputs to specify the differences. C# and Java have their own generics implementations at this point, so they might be worth checking out.
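To make lifting concrete, here is a small C++ sketch (my own example, not taken from the Concept C++ tutorial): several near-identical "combine all the elements" functions collapse into one template, and the only requirement left is that the element type supports +:

    #include <string>
    #include <vector>

    // The lifted template: the loop is the commonality; operator+ is the
    // per-type difference, captured by the template parameters.
    template <typename Range, typename T>
    T accumulateAll(const Range& r, T init)
    {
        for (const auto& x : r)
            init = init + x;
        return init;
    }

    int main()
    {
        std::vector<int> n{1, 2, 3};
        std::vector<std::string> s{"a", "b"};
        int total = accumulateAll(n, 0);                        // 6
        std::string joined = accumulateAll(s, std::string{});   // "ab"
    }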