I'm trying to write a TwinCAT 3 library that uses global constants defined in the main project, for example to declare arrays sized by those constants and cycle through them. However, I've been unsuccessful, and I wonder if this can be done at all. When I build the main project I just get the error "Error 4 Border 'cPassedConstant' of array is no constant value". The error comes from the array declared in the library.
I've tried making a GVL in the library with a constant of the same name and then setting its "external implementation" property to TRUE, but that does not help.
My goal here is to make an I/O management library with filtering and such. Then I could just add it to the main project and define some constants like cDigitalInputCount, cAnalogInputCount, and so on.
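Roughly, the library declares something like this (FB_Filter and the names are placeholders for illustration):

VAR
    aFilters : ARRAY[1..cPassedConstant] OF FB_Filter; // cPassedConstant is meant to come from the main project
END_VAR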
Maybe you can get by with the new ARRAY[*] feature instead, although it is still very limited. Other than that, there is no way around defining the constant in the library itself.
The library concept is the same as in other environments: a library provides you with reusable components. Your main project depends on the library, not the other way around, so your library cannot know anything about the project in which it is used.
A confusing thing in TwinCAT 3 is that you can successfully build projects that contain programming errors: the compiler accepts broken code in a project as long as that code is never called. Therefore, when you ship libraries, you should always use "Check all objects".
You should check out Beckhoff's feature called a Parameter List. By adding a parameter list to the library project, you can redefine library constants in the project that uses the library; the redefinition happens in the library manager.
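A parameter list is declared much like a GVL with constants. A minimal sketch with invented names (the defaults below are what the library ships with; each project can override them in the library manager):

// Parameter list "ParamList" inside the library project
VAR_GLOBAL CONSTANT
    cDigitalInputCount : UINT := 8; // default, override per project in the library manager
    cAnalogInputCount  : UINT := 4;
END_VAR

// Elsewhere in the library, the parameters can then size arrays:
aAnalogIn : ARRAY[1..ParamList.cAnalogInputCount] OF FB_AnalogIO;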
I think that should do it. Of course, the other option is ARRAY[*], which is awesome too (for the PLC programming world). The drawback of a parameter list is that the redefinition is project-wide, whereas ARRAY[*] allows the size to change dynamically.
I would suggest using a variable-length ARRAY[*], as explained in the link below (and also in the Beckhoff Infosys, section Data Types/Array).
The point is that you declare the ARRAY[1..cAINs] OF FB_AnalogIO in your main program (which knows FB_AnalogIO from your analog library and can declare the array with a constant size).
PRG_IO should then be changed to either a function or a function block, so that it accepts the ARRAY[*] as a VAR_IN_OUT without knowing the exact size.
https://stefanhenneken.wordpress.com/2016/09/27/iec-61131-3-arrays-with-variable-length/
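A minimal sketch of that pattern (FB_IoHandler is an invented name): the main program owns the fixed-size array, and the library block takes it as a VAR_IN_OUT with open bounds, querying the actual limits with LOWER_BOUND and UPPER_BOUND:

FUNCTION_BLOCK FB_IoHandler
VAR_IN_OUT
    aIO : ARRAY[*] OF FB_AnalogIO; // actual size is unknown to the library
END_VAR
VAR
    i : DINT;
END_VAR

FOR i := LOWER_BOUND(aIO, 1) TO UPPER_BOUND(aIO, 1) DO
    aIO[i](); // cycle through and process each channel
END_FOR

The main project then declares aAnalog : ARRAY[1..cAnalogInputCount] OF FB_AnalogIO; and calls fbIoHandler(aIO := aAnalog);.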
The two functions mkYesodData and mkYesodDispatch in the Yesod framework are supposed to separate the handler definitions from the dispatch process. Yet, by some miracle (to me), the templates use this interesting function "resourcesApp":
mkYesodDispatch "App" resourcesApp
The only mention of this function I have found on Hoogle is in the Hledger package, and that is not a Yesod dependency.
The School of Haskell article at the link below explains that resourcesApp is "generated" by mkYesodData, but it still does not work on my side.
https://www.schoolofhaskell.com/school/advanced-haskell/building-a-file-hosting-service-in-yesod/part%202
What is the reason for this?
There's some Template Haskell (TH) going on under the hood in Yesod, and I think this is what's confusing you. Template Haskell can be confusing when you search documentation, because it produces values at compile time, for use at run time, that aren't there before the code is compiled. resourcesApp is just one of those values.
In the code you reference, the author explains that you must have another module (which he calls Foundation) in which you have invoked mkYesodData. Indeed, without this other module, the code in the Dispatch module won't work. Strangely, it's not until Part 4 (https://www.schoolofhaskell.com/school/advanced-haskell/building-a-file-hosting-service-in-yesod/part%204) that he seems to define the Foundation module, but you can see that there is a line:
mkYesodData "App" $(parseRoutesFile "config/routes")
That may not look like it defines a value called resourcesApp, but sure enough, it does.
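A minimal sketch of what that Foundation module boils down to (the route is invented for illustration):

{-# LANGUAGE TemplateHaskell, QuasiQuotes, TypeFamilies #-}
module Foundation where

import Yesod

data App = App

-- This splice runs at compile time and, among other things, generates a
-- top-level value whose name is "resources" ++ "App", i.e. resourcesApp,
-- holding the parsed route tree. mkYesodDispatch consumes that value
-- later, from a separate module.
mkYesodData "App" [parseRoutes|
/ HomeR GET
|]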
In short, you should be able to get your code working by just finishing the entire tutorial and running the code altogether.
In case you're wondering, a call to mkYesodData takes a String and then literally generates code that defines a value named resources**** where the **** is the string you passed. In this case, that would be a value resourcesApp, but in someone else's Yesod project it could be resourcesFoo. Furthermore, since this resourcesFoo value isn't concretely in the code, projects that use Yesod typically won't have it show up in their export lists or Haddock documentation. It's actually very strange that you found even one hit for resourcesApp on Hoogle at all, but upon closer examination it kind of makes sense: Hledger seems to be some sort of extended interface around Yesod, so they pre-generated the TH values so that they would be easily accessible to users.
As another note, TH has some restrictions on its use. For one, you typically need to perform the TH invocations ("splices", as they're typically called) in a separate module from the one where you use the generated values. This is probably why the author has you create a separate Foundation module rather than just putting the mkYesodData ... line in the Dispatch module.
I'm currently going through my project in JetBrains PyCharm 2017.1.5, documenting all my Python 3.6 classes and methods, and several things stand out to me about the docstring format.
I want to link to other methods / functions / classes from some of the docstrings, but I cannot figure out how to do this. The documentation for reStructuredText is very, very extensive, but it doesn't say anything about referencing other docstrings in PyCharm. In fact, the vast majority of the snippets from that page do not even work in PyCharm. (Why is that?)
I've managed to find that you can use :class:`<class_name>` to reference a class, but :class:`<class.method>` does not work, and similarly named constructs like :func:`<func_name>` do not create a hyperlink. I've also seen :ref:`<name>` come up, but that one doesn't work either.
(I would switch to Epytext in a heartbeat, since it has everything I want and is much simpler, if not for this error: "You need configured Python 2 SDK to render Epydoc docstrings" in the Ctrl+Q frame.)
It would also be extremely helpful if there were a way to inherit the docstring in subclasses / overridden methods. PyCharm does this automatically if you leave the docstring blank, which makes me think it is possible to do manually. But, again, I can't find any information on it.
It's turning out to be mind-blowingly complicated to do something so, so simple. So, any help will be appreciated!
I want to link to other methods / functions / classes from some of the docstrings, but I cannot figure out how to do this.
You're correct that the reStructuredText documentation does not cover this, because it's not a feature of reStructuredText.
Likely you are (explicitly, or implicitly via some tool) using the Sphinx system – a superset of Docutils – to allow (among many other features) references between different docstrings.
Sphinx defines several Docstring “roles” (the :foo: before the backtick-quoted text) for different purposes:
doc, a reference to an entire document.
ref, an arbitrary cross-reference.
… many others.
Specifically for Python code, the py “domain” has its own set of roles for Python docstrings (a usage sketch follows the list):
:py:mod:
Reference a module; a dotted name may be used. This should also be used for package names.
:py:func:
Reference a Python function; dotted names may be used. The role text need not include trailing parentheses to enhance readability; they will be added automatically by Sphinx if the add_function_parentheses config value is True (the default).
:py:data:
Reference a module-level variable.
:py:const:
Reference a “defined” constant. This may be a Python variable that is not intended to be changed.
:py:class:
Reference a class; a dotted name may be used.
:py:meth:
Reference a method of an object. The role text can include the type name and the method name; if it occurs within the description of a type, the type name can be omitted. A dotted name may be used.
:py:attr:
Reference a data attribute of an object.
:py:exc:
Reference an exception. A dotted name may be used.
:py:obj:
Reference an object of unspecified type.
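For example, a docstring can cross-reference related code like this (all names here are invented for illustration):

class Pump:
    """Drive the coolant loop.

    See :py:meth:`Pump.stop` for shutdown and :py:func:`calibrate`
    for sensor setup. Raises :py:exc:`RuntimeError` on overpressure.
    """

    def stop(self):
        """Stop the pump immediately."""


def calibrate():
    """Calibrate the sensors; uses :py:class:`Pump` internally."""

Note that these links are resolved when Sphinx processes the docstrings; an IDE may only support a subset of them, which matches what you observed in PyCharm.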
I've been tasked with creating conformance tests of user input. The task is fairly tricky and we need very high levels of reliability. The server runs on PHP, the client runs on JS, and I thought Haxe might reduce the duplicated work.
However, I'm having trouble with dead-code elimination. Since I am just creating helper functions (utilObject.isMeaningOfLife(42)), I don't have a main program that calls each one. I tried adding @:keep to a utility class, but it was cut out anyway.
I tried to specify that utility class through the -main switch, but I had to add a dummy main() method, and this doesn't scale beyond that single class.
You can force all the files defined in a given package and its sub-packages to be included in the build using a compiler argument:
haxe --macro include('my.package') ...
This is a shortcut to the macro.Compiler.include function.
As you can see from the signature, the function can include packages recursively and can also exclude packages (see the example after the signature).
static include (pack:String, rec:Bool = true, ?ignore:Array<String>, ?classPaths:Array<String>):Void
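For example (the package names are invented), this includes everything under my.package recursively while skipping one sub-package:

haxe -cp src --macro "include('my.package', true, ['my.package.internal'])" -js out.js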
I don't think you have to use @:keep for each library class in that case.
I'm not sure if this is what you are looking for, but I hope it helps.
Otherwise, these checks could be helpful:
Is it bad that the code is cut away if you don't use it?
Could it be that some code is inlined in the final output?
Compile your code using the compiler flag -dce std, as mentioned in the comments.
If you use the static analyzer, try turning it off.
Add @:keep and reference the class and function somewhere.
Otherwise, provide a minimal setup if you can reproduce the problem.
I have an Erlang program, compiled with rebar. After the new Debian release it won't compile anymore, complaining about these declarations:
-import(erl_scan).
-import(erl_parse).
-import(io_lib).
saying:
bad import declaration
I don't know Erlang; I am just trying to compile this thing.
Apparently something bad happened to -import recently: http://erlang.org/pipermail/erlang-questions/2013-March/072932.html
Is there an easy way to fix this?
Well, -import() works, but it does NOT do what you are expecting it to do. It does NOT "import" the module into your module, nor does it go out, find the module, and make all its exported functions usable without the module name. You use -import like this:
-import(lists, [map/2,foldl/3,foldr/3]).
Then you can call the explicitly imported functions without module name and the compiler syntactically transforms the call by adding the module name. So the compiler will transform:
map(MyFun, List) ===> lists:map(MyFun, List)
Note that this is ALL it does. There are no checks for whether the module exists or whether the function is exported; it is a purely naive syntactic transformation. All it gives you is slightly shorter code. For this reason it is seldom used, and most people advise against using it.
Note also that the unit of code for all operations is the module, so the compiler does not do any inter-module checking or optimisation at all. Everything between modules, like checking a module's existence or which functions it exports, is done at run time when you call a function in the other module.
No, there is no easy way to fix this. The source code has to be updated, with every reference to an imported function prefixed with the module in question. For example, every call to format should be replaced with io_lib:format, though you'd have to know which function was imported from which module.
You could start by removing the -import directives. The compilation should then fail, complaining about undefined functions. That is where you need to add the correct module name. Look at the documentation pages for io_lib, erl_scan and erl_parse to see which functions live in which module (a sketch of the fix follows).
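A sketch of the mechanical fix (the function here is invented for illustration):

%% Before: relied on "-import(io_lib)." and called format/2 unqualified:
%%     render(X) -> format("~p~n", [X]).

%% After: drop the -import and qualify the call with its module:
render(X) ->
    io_lib:format("~p~n", [X]).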
Your problem is that you were using the experimental -import(Mod) directive which is part of parameterized modules. These are gone in R16B and onwards.
I often advise against using import. It hurts quick searches and unique naming of foreign calls. Get an editor which can quickly expand names.
Start by looking at what is stored in the location $ERL_LIBS; typically this points to /usr/lib/erlang/lib.
I have built a Linux shared object which I inject into a third-party program to intercept some dynamic function calls using LD_PRELOAD.
The third-party program uses an SO, "libabc.so", located at some path. My injected SO uses another SO, also called "libabc.so", located at another path (essentially identical, but with slight code differences).
My problem is now that calls to a function "def", which appears in both copies of libabc.so, are always resolved to the first one (presumably because it is loaded first?!). How can I get them resolved against the second libabc.so?
Many thanks!
Unless something has changed since I used to do this, you will need to dlopen() the library you want to pass calls on to and call the function manually, something like:
#include <dlfcn.h> /* link with -ldl */

void *handle = dlopen("/path/to/libabc.so", RTLD_LAZY);
void (*otherDef)(int) = (void (*)(int))dlsym(handle, "def"); /* assuming def takes an int */
otherDef(parameter); /* the call now goes to the explicitly loaded libabc.so */
There is a complete example of how to do this very thing at Linux Journal.
If you only want to use one libabc.so version, you can always use LD_PRELOAD to load it along with your own shared object before anything else.
If you want to use multiple versions, you have a few alternatives:
Use dlopen() in your shared object to load that library. Since you have created a function injection object you should be familiar with this procedure. This is the more generic and powerful way - you could even mix & match functions from different library versions.
Use a different DT_SONAME for the library version your shared object links against. Unfortunately this requires (slightly) changing the build system of that library and recompiling.
Link your shared object statically against the library in question. Not always possible, but it does not require modifying the library. The main issue with this alternative is that any change in the library must be followed by relinking your shared object for the changes to be picked up.
Warning: you may need to use a custom linker script or specific linker options to avoid symbol conflicts.
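For example (paths and names invented), when linking your injected object statically against your private copy of the library, the -Bsymbolic linker flag makes the object's internal references bind to its own definitions instead of being preempted by the other libabc.so:

gcc -shared -o inject.so inject.o /path/to/private/libabc.a -Wl,-Bsymbolic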