Is there any difference in how to declare a ClojureScript dependency in the shadow-cljs.edn file?

I have been working on a Clojure/ClojureScript project and something intrigues me.
In the shadow-cljs.edn file, there is a declaration of the dependencies. As you can see below, some of them have a "full name" declaration, written as username/repository-name; an example is venantius/accountant.
Others are declared only as repository-name, such as [bidi "2.1.5"], which is actually published by the juxt user (source).
I am afraid this could be problematic, since multiple users could create repositories with the same name:
{:source-paths ["src" "dev" "test"]
 :dependencies
 [;; for deploy with lein, the deps below need to be in project.clj
  ;; third-party dependencies
  [venantius/accountant "0.2.5"]
  [bidi "2.1.5"]
  [cljs-hash "0.0.2"]
  [clova "0.46.0"]
  [com.andrewmcveigh/cljs-time "0.5.2"]
  [org.clojure/core.match "1.0.0"]
  [binaryage/dirac "RELEASE"]
  [com.pupeno/free-form "0.6.0"]
  [garden "1.3.10"]
  [hickory "0.7.1"]
  [metosin/malli "0.8.4"]
  [medley "1.4.0"]
  [binaryage/oops "0.7.0"]
  [djblue/portal "0.16.1"]
  [djblue/portal "0.18.0"]
  [proto-repl "0.3.1"]
  [reagent "1.1.0"]
  [re-frame "1.2.0"]
  [district0x/re-frame-window-fx "1.1.0"]
  [cljsjs/react-beautiful-dnd "12.2.0-2"]]}
I am not sure how the low-level dependency resolution works in a Clojure/ClojureScript project.
Is it bad practice to use only the brief name of a dependency? Is an ambiguity problem feasible, or even possible?

Until not too long ago it was allowed to publish dependencies to https://clojars.org without a group name. In those cases the group would become identical to the artifact id. So bidi is effectively bidi/bidi.
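For example, both of these coordinates resolve to the same artifact on Clojars (version taken from the question):

[bidi "2.1.5"]       ;; group id omitted, defaults to the artifact id
[bidi/bidi "2.1.5"]  ;; the same dependency with an explicit group id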
Nowadays, new packages may only be published with a specific group name. However, old packages may continue using their older name.
The names used to publish also do not need to match the GitHub repository coordinates; these are separate systems. They often match, but are not required to.
To answer your question: you should avoid using the same dependency multiple times, and you should use the officially published name for each library. Some libraries are still updated under their old identifiers. Some moved to the newer, longer names, while the old ones remain available but no longer receive updates. Always consult the documentation of the specific libs to be sure which one you are supposed to use; they'll usually have some kind of info in their READMEs.
Conflicts may happen if you get the "same" lib via different identifiers. These may be very difficult to identify when you run into trouble. This is true for any dependency resolver you use (e.g. project.clj, deps.edn, shadow-cljs.edn). Best practice is to keep your dependencies as clean as possible.
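As an illustration of how such a conflict can arise (next.jdbc really did move from the seancorfield group to com.github.seancorfield when Clojars introduced verified group names; the version numbers here are only illustrative):

[seancorfield/next.jdbc "1.1.613"]            ;; legacy identifier, no longer updated
[com.github.seancorfield/next.jdbc "1.2.780"] ;; current identifier
;; if both end up on the classpath (directly or transitively), the resolver
;; sees two unrelated artifacts and loads both, even though they contain
;; the same namespaces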

Related

Why Is Doppl Trying To Pull in ReactiveStreams?

I am attempting to convert parts of an Android app to iOS using Doppl, and I am getting a strange result: Doppl keeps trying to pull in android.arch.lifecycle:reactivestreams, even though I don't want it to.
Specifically, in app/build/j2objcSrcGenMain/android/arch/lifecycle/, there is a reactivestreams/ subdirectory with R.h and R.m files in it. This seems to make Xcode cranky and may explain why I had some oddities with pod install.
My app/build.gradle has compile "android.arch.lifecycle:reactivestreams:$archVer", because my activity is using LiveDataReactiveStreams.fromPublisher(). However:
The activity is not in the translatePattern (and since its code is not showing up in app/build/j2objcSrcGenMain/, I have to assume that the translatePattern is fine)
I do not have a doppl statement related to reactivestreams, because there does not appear to be a Doppl conversion of this library (nor should it be needed here)
AFAIK, nowhere else in this app am I referring to LiveDataReactiveStreams, which AFAIK is the one-and-only public class from the reactivestreams library
So, the questions:
What determines whether Doppl creates R.h and R.m files for some dependency? It's not the existence of a doppl statement, as I have doppl statements for a lot of other dependencies (RxJava, RxAndroid, Retrofit) and those do not get R.h and R.m files. It's not whether the dependency is referenced from generated code, as my repository definitely uses RxJava and Retrofit, yet there are no R files for those.
How can I figure out why Doppl generates R.h and R.m for reactivestreams?
Once I get this cleared up... do I re-run pod install, or is there some other pod command to refresh an existing pod with a new implementation?
Look into 'app/build/generated/source/r/debug' and confirm there's an R.java being created for the architecture component. It'll be under 'android/arch/lifecycle/reactivestreams'.
I think there are 2 problems here.
Problem 1
Somehow Doppl/J2objc is of the opinion that this file should be transpiled. It could be either that 'translatePattern' matches with it, or that something in the shared code is referencing it. If you can't figure out which, please post a comment and I'll try to help (or post in slack group).
Problem 2
Regardless of why that 'R.java' is being sucked into the translate step, because of how stock J2objc is configured, the code is being generated with package folders instead of creating One Big Name. That generated file should be called 'AndroidArchLifecycleReactivestreamsR.h' (and 'AndroidArchLifecycleReactivestreamsR.m'). Xcode really doesn't like package folders. That's why there's a slightly custom J2objc being used with Doppl, so we can have files with big names instead of folders.
In cases where you intentionally use package names that match what J2objc considers to be "system" classes, you need to provide a header mapping file to force long names. The 'androidbase' doppl library needs to add a lot of files that are in the 'android' package, which J2objc considers "system". We override those names in the mapping file.
build.gradle
https://github.com/doppllib/core-doppl/blob/master/androidbase/build.gradle#L19
mapping file
https://github.com/doppllib/core-doppl/blob/master/androidbase/src/main/java/androidbase.mappings
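For reference, entries in such a mapping file are simple classname=header lines. A hypothetical entry forcing the long name discussed above could look like this (illustrative only, not copied from the linked file):

android.arch.lifecycle.reactivestreams.R=AndroidArchLifecycleReactivestreamsR.h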
I screwed up.
In my dopplConfig, I have:
translatePattern {
    include '**/api/**'
    include '**/arch/**'
    include '**/RepositoryTest.java'
}
In this case, **/arch/** not only matches my arch package, but also the arch package from the Architecture Components.
Ordinarily, this would not matter, because the Architecture Components source code is not in my project. But, R.java gets generated, due to resources, and the translatePattern includes generated source code in addition to lovingly hand-crafted source code. So, that's where my extraneous Objective-C was coming from.
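A narrower pattern avoids the collision. A sketch, assuming the app's own code lives under a com/myapp base package (a placeholder name):

translatePattern {
    include '**/api/**'
    include 'com/myapp/arch/**'      // no longer matches the generated android/arch/** sources
    include '**/RepositoryTest.java'
}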
Many thanks to Kevin Galligan for his assistance with this, out on the #newbiehelp Doppl Slack channel!

In Flow NPM packages, what's the proper way to suppress issues so user apps don't see errors?

If you use something like $FlowIssue it's not guaranteed to be in everyone's .flowconfig file. If you declare a library interface, that seems to only work for the given project, not in other projects that import your package (even if you provide the .flowconfig and interface files in your NPM package).
Here's the code I'm trying to suppress errors for in apps that use my package:
// $FlowIssue
const isSSRTest = process.env.NODE_ENV === 'test' // $FlowIssue
  && typeof CONFIG !== 'undefined' && CONFIG.isSSR
CONFIG is a global that exists when tests are run by Jest.
I previously had an interface declaration for CONFIG, but that wasn't honored in user applications; perhaps I'm missing a mechanism to make that work? With this solution, at least there is a good chance that users have the $FlowIssue suppression comment. It's still not good enough though.
What's the idiomatic solution here for packages built with Flow?
Declaring a global variable
This is the way to declare a global variable:

declare var CONFIG: any;

Instead of any, you could (and should) use the actual type.
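For the CONFIG global from the question, a project-level library definition could look like this (a sketch; isSSR is the only property the snippet above actually uses):

declare var CONFIG: {
  isSSR: boolean,
};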
Error Suppression
With Flow v0.33 they introduced this change:
suppress_comment now defaults to matching // $FlowFixMe if there are
no suppress_comments listed in a .flowconfig
This means that there is a greater chance of your error being suppressed if you use $FlowFixMe.
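If you don't want to rely on that default, the suppression comments can be listed explicitly in .flowconfig (a sketch using the regex form from the Flow documentation):

[options]
suppress_comment=\\(.\\|\n\\)*\\$FlowFixMe
suppress_comment=\\(.\\|\n\\)*\\$FlowIssue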
Differences in .flowconfig between your library and your consumers' code are a real issue, and there is no way to make it so that your code can be dropped into any other project and be sure it will typecheck. On top of that, even if you have identical .flowconfigs, you may be running different versions of Flow. Your code may typecheck in one version, but not in another, so it may be the case that consumers of your library will be pinned to a specific version of Flow if they want to avoid getting errors reported from your library.
Worse, if one library type checks only in one version of Flow, and another type checks only in another version, there may be no version of Flow that a consumer can choose in order to avoid errors.
The only way to solve this generally is to write library definition files and publish them to flow-typed. Unfortunately, this is currently a manual process because there is not yet any tooling that can take a project and generate library definitions for it. In the meantime, simply copying your source files to have the .js.flow extension before publishing will work in some cases, but it is not a general solution.
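For the .js.flow approach, the flow-copy-source utility automates the copying step. A sketch of a package.json script, assuming Babel compiles src into lib:

"scripts": {
  "build": "babel src -d lib && flow-copy-source src lib"
}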
See also https://stackoverflow.com/a/43852211/901387

Pharo dependency hell

I am trying to develop a simple project in Pharo, and I would like to add its dependencies in Metacello. My project depends on Glamour, Roassal and XMLSupport.
A way to cleanly install my project is to install the dependencies by hand first. Following the book Deep into Pharo one can do
Gofer new
    smalltalkhubUser: 'Moose' project: 'Glamour';
    package: 'ConfigurationOfGlamour';
    load.
(Smalltalk at: #ConfigurationOfGlamour) perform: #loadDefault.

Gofer new
    smalltalkhubUser: 'ObjectProfile' project: 'Roassal';
    package: 'ConfigurationOfRoassal';
    load.
(Smalltalk at: #ConfigurationOfRoassal) load.

Gofer new
    squeaksource: 'XMLSupport';
    package: 'ConfigurationOfXMLSupport';
    load.
(Smalltalk at: #ConfigurationOfXMLSupport) perform: #loadDefault.
and then my project will work fine.
I have tried to create a ConfigurationOfMyProject using the Versionner, and I have added Glamour, Roassal and XMLSupport as dependencies, using the versions that are currently installed in my image (2.6-snapshot, 1.430 and 1.2.1 respectively).
The problem is that I am not able to get my project working when loading it with Metacello in a fresh image. The configuration loads fine, but whenever I try to load my classes I get missing-method errors in Glamour. Moreover, it is apparent that something is different, because even the world menu has different entries.
I have tried other combinations of versions, including using the stable Glamour (2.1), but I have obtained more errors, including not even being able to open the project in the Versionner (it complains about a missing Roassal name).
What is the correct way to add these dependencies cleanly?
First of all I want to highlight that if the configuration lives in a class named ConfigurationOf<proj-name>, you can load it using the #configuration message:
Gofer new
    smalltalkhubUser: 'Moose' project: 'Glamour';
    configuration;
    load.
(Smalltalk at: #ConfigurationOfGlamour) perform: #loadDefault.
As I cannot see your project, I can only suggest writing the configuration by hand. There is an easy tutorial called Dead simple intro to Metacello.
According to your description it should be something like:
baseline01: spec
    <version: '0.1'>
    spec for: #common do: [
        spec blessing: #release.
        spec repository: 'your repo url'.
        spec
            package: 'YourPackage'
            with: [ spec requires: #('Glamour' 'Roassal' 'XMLSupport') ].
        "also maybe you have a couple of packages that depend on different projects"
        spec project: 'Glamour' with: [
            spec
                className: 'ConfigurationOfGlamour';
                repository: 'http://smalltalkhub.com/mc/Moose/Glamour/main';
                version: #'2.6-snapshot' ].
        spec project: 'Roassal' with: [
            spec
                className: 'ConfigurationOfRoassal';
                repository: 'http://smalltalkhub.com/mc/ObjectProfile/Roassal/main';
                version: #'1.430' ].
        "and same for XMLSupport" ].
Also you can try to load #development versions, as I have an impression that projects like Roassal and Glamour have very outdated stable versions. Also please note that Roassal2 is actively developed and will replace original Roassal in Moose platform, maybe you want to consider using it.
I would seriously discourage writing configs by hand; that is the assembly code of Metacello ;) Unless you are working on cross-Smalltalk-platform projects with platform-specific code (e.g. code for Pharo 1.4 vs Squeak 4.5), an area which hasn't been explored yet, Versionner is the tool for you. I have written dozens of configs with it and have yet to run into a roadblock.
When you say you added them as dependencies, did you just add them as projects in the "Dependent projects" pane?
If so, you also have to specify which of your project's packages depend on them. To do this, you select the relevant package of your project on the "Packages" pane.
Now, click on the edit button with the pencil icon that just became enabled. In the dialog that appears, click the green + button and add the external projects of interest.
It looks like you are trying this in an old version of Pharo?
Roassal has been superseded by Roassal2, and the support for XML is on smalltalkhub, split into ConfigurationOfXMLWriter and ConfigurationOfXMLParser, both in PharoExtras.
If you load the right groups from Glamour you don't need to describe the dependencies on Roassal, as Glamour already depends on Roassal(2). That explains your cyclic dependency.
You have also run into the problem we've recently talked about on the Pharo mailing lists: #stable is not a usable symbolic version name. In the Seaside/Magritte/Grease projects we've changed to using #'release3.1'-style symbolic version names. That ensures that there is less of a ripple effect when #stable progresses.
Snapshot versions should never be a dependency, they just describe what is loaded at the moment, and are basically not upgradable.
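In a configuration, depending on such a release-style name instead of #stable would look roughly like this (a sketch modeled on the baseline above; the class name follows the Seaside convention mentioned, and the repository URL is illustrative):

spec project: 'Seaside3' with: [
    spec
        className: 'ConfigurationOfSeaside3';
        repository: 'http://smalltalkhub.com/mc/Seaside/Seaside31/main';
        version: #'release3.1' ].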
[edit]
Metacello by default tries to be smart about not installing older versions over newer. This works pretty well as long as things are not moved to different packages. So it's a bit of bad luck there that you ended up with a non-working combination.
Metacello supports complex workflows, and different Smalltalk projects (need to) use different workflows. It often takes some time to reach consensus on the best way to do things.
Roassal does not depend on Glamour, but you can create the cycle in your own configuration :)
Packages were moved from squeaksource to ss3 and smalltalkhub because the server had had stability problems. More recently, those problems seem to have been fixed. The XML support was split as it was noted that a lot of applications don't actually need both writing and reading of XML.
Once you have a working configuration, it might be a good idea to build and test it on the continuous integration server of pharo: http://ci.inria.fr/pharo-contribution
If not your actual application, then at least the open source parts you use. That way the Pharo, Glamour & Roassal teams can know if a change they make breaks something.

Node.js/npm - dynamic service discovery in packages

I was wondering whether Node.js/npm include any kind of extension mechanism comparable to Python setuptools' "entry points".
So, in short:
is there any way I can do dynamic discovery of services provided by other packages using npm?
if not, what would be the best way to implement something similar? Specifying the extension name in the main module's configuration file seems to be the logical solution, but I wonder whether something "automatic" can be done.
I'm not aware of any builtin mechanism to do this.
One viable way of doing it yourself:
I made a small tool (Jumpstart) to quickly create project scaffolding from templates with placeholders, and I used a kind of plugin mechanism for that. It basically comes down to this: the Jumpstart script searches for modules named jumpstart-* "adjacent" to where the module itself is installed. So it would work for both local and global installations. If installed locally, it would search the other local modules (on the same level) and if global, it searches the other global modules.
Note that here, "search" comes down to a simple fs.exists check to see if there's a Jumpstart template module with a particular name installed. However, nothing would stand in the way of actually getting a full list of all installed packages matching the jumpstart-* pattern and loading them all at once. I could also search up the entire directory tree for node_modules directories and do the same. There's no point in doing this for this particular program, however.
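A minimal sketch of that adjacent-module discovery (the function name and the synchronous style are illustrative, not Jumpstart's actual source):

const fs = require('fs');
const path = require('path');

// collect sibling modules named jumpstart-* from the same node_modules dir
function discoverPlugins() {
  const modulesDir = path.resolve(__dirname, '..');
  return fs.readdirSync(modulesDir)
    .filter((name) => name.startsWith('jumpstart-'))
    .map((name) => require(path.join(modulesDir, name)));
}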
See https://npmjs.org/package/jumpstart for docs.
The only limitation of this technique is that all modules must be named in a consistent fashion: start with some string, end with some string, something like that. Any rogue packages polluting the namespace could be detected by doing further checks on a package's contents: what files does it contain? What kind of object does its main module export? Etc.
Brunch also uses a plugin mechanism. This one actually deals with file extensions, so is more relevant: https://github.com/brunch/brunch/wiki/Plugins . See for example source of the CoffeeScript plugin https://github.com/brunch/coffee-script-brunch/blob/master/src/index.coffee .

What should I do when I add a new config parameter to config file of my package?

I am dealing with making packages for some projects.
Assume I have a config file like this in my project:
name=foo
mail=foo@foo.com
After installation, the user edits the config file with his/her information:
name=user
mail=user@somedomain.com
When an update comes, in order not to ruin the user's config file, I do not replace the conf file with the new one, as all packages should do.
There is no problem up to this point.
What if I add a new parameter to my config file? For example,
name=foo
mail=foo@foo.com
age=23
If I replace the config file with the new one, the user will lose their settings. If I don't, my new parameter cannot be used. I wonder what the general procedure is for this situation? My question is valid no matter the package type (i.e. rpm, deb or tbz).
@William Pursell: Just because you don't see the problem, that does not mean that there isn't a problem.
This definitely is a problem, and it has plagued me for as long as I have maintained deb packages. For example: many configuration files contain commented configuration items and other comments that the package user is supposed to read and understand before applying his configuration changes. If, in the normal course of software development, there are new configuration items, new default values, or different semantics for existing ones, the comments have to be adapted. This is the package maintainer's job. But at the same time, the package must not mess with the configuration changes already applied by the user.
When I do this in Debian/Ubuntu, the package user is confronted with this intimidating question:
Configuration file `/etc/...'
==> Modified (by you or by a script) since installation.
==> Package distributor has shipped an updated version.
What would you like to do about it ? Your options are:
Y or I : install the package maintainer's version
N or O : keep your currently-installed version
D : show the differences between the versions
Z : start a shell to examine the situation
The default action is to keep your current version.
*** ... (Y/I/N/O/D/Z) [default=N] ?
for every single file. That is, for some package upgrades, the user has to type yes/no/maybe :-) many times, every time. Fact is that the package user usually does not know what this is all about. She has to dig into the files, diff versions, and do some guessing in order to figure out a reasonable answer. An answer, by the way, that the package maintainer could have made already, if the packaging system would allow it.
I recognize that there may not exist a general solution to this problem. But I'd love to hear how other package maintainers cope with the situation.
I'm not sure I see the problem. As long as the software can handle the absence of the field in the config file (i.e., use a reasonable default), there is no difference between the two scenarios you describe. If your software cannot handle the absence of the field, I would argue that is a bug.
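A minimal sketch of that "reasonable default" approach, reading the key=value format from the question (Python only for illustration; the default value for age is an assumption):

# read a key=value config file, falling back to defaults for missing keys
def read_config(path):
    config = {"age": "0"}  # default for the newly added parameter
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                config[key.strip()] = value.strip()
    return config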
