I have created a dynamic (.so) library which bundles some functionality for a storage backend that I need.
As it is, it offers a known interface and provides backends for things like memcached, MySQL, SQLite, etc.
Now my problem is that my shared library depends on libmemcached, libsqlite3, libmysqlclient, etc., and I don't know how to package it, since clients that only want SQLite shouldn't need to have libmemcached installed.
I've been thinking of splitting it into different libraries, but it seems like I'll end up with almost 20 .so libraries, and I don't like that idea.
Any alternative?
One alternative is to put an interface within the shared library you made which allows it to load its dependencies at runtime. For example, you can have separate init functions for different components:
init_memcached();
init_sqlite();
You implement these initialization functions using dlopen() and friends.
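A minimal sketch of such an init function, assuming the backend API is resolved into function pointers at runtime (sqlite3_open and the soname are real SQLite names; the surrounding structure is illustrative):

/* Resolve the SQLite backend only when it is actually requested. */
#include <dlfcn.h>
#include <stdio.h>

/* Filled in at runtime; matches sqlite3_open(const char*, sqlite3**). */
static int (*p_sqlite3_open)(const char *, void **);

int init_sqlite(void)
{
    void *handle = dlopen("libsqlite3.so.0", RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        fprintf(stderr, "sqlite backend unavailable: %s\n", dlerror());
        return -1;  /* caller can fall back or report the error */
    }
    *(void **)&p_sqlite3_open = dlsym(handle, "sqlite3_open");
    if (!p_sqlite3_open) {
        fprintf(stderr, "missing symbol: %s\n", dlerror());
        dlclose(handle);
        return -1;
    }
    return 0;  /* backend ready; calls go through p_sqlite3_open */
}

Link the wrapper itself with -ldl; none of the backend libraries end up in its DT_NEEDED entries, so clients only need to install the ones they actually initialize.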
You can use dynamic loading with dlopen and dlsym.
The advantage of this approach is that your application will still run fine when a given shared library is not present on the client side.
You could load only the needed shared libraries at run time, but in my opinion that is not such a good approach.
I would split the shared library, but not into 20 libraries. See if you can group some common functionality together.
I just started to use i18next in my monorepo (I'm new to it), which has several server-side microservices and a couple of frontends, including several custom library components. Many of the language strings are shared, and some are app-specific. I could not decide on a logical way to approach the problem.
Tooling: TypeScript - Node/Express - React/Vite - Electron/React (Desktop)
First, some questions:
Where should I keep the language resources during development? In a separate library? In an app in the monorepo? Under each library module?
From where should I serve them? Something like lang.mydomain.com? Or by re-dividing them under each app during build (e.g. with Vite)?
All the examples/tutorials I could find use a single app and include i18next.js/ts at the app level. I think I need to wrap it into a library module for my purposes. How can I do that without losing access to its capabilities/types/methods, etc.? By dynamically creating instances in a higher-order module? (The library is extensive, and I'm nearly lost.) I sketch one idea after my initial thoughts below.
My initial thoughts:
As many translations will be shared, translating each one separately would be illogical; they should be shared.
As there can be many languages, using i18next-http-backend seems logical for the web apps, with embedded resources via i18next-fs-backend for the desktop app.
Dividing the resources as common/graphs/tables/ui, etc. seems logical (these will be divided across a library module hierarchy, though).
A logical way could be to include each module's language resources in the module itself, but that would not help the translators; in that respect, the resources should all live in one place at the top level.
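To make the wrapping idea concrete, here is roughly what I imagine such a shared library module could look like (a sketch only; i18next.createInstance() and i18next-http-backend are real APIs, but the namespaces and URL are my assumptions):

// shared i18n wrapper module (sketch)
import i18next, { i18n, InitOptions } from 'i18next';
import HttpBackend from 'i18next-http-backend';

// Factory so each app gets its own isolated instance while the wrapper
// keeps exposing i18next's own types and methods.
export function createI18n(overrides: InitOptions = {}): i18n {
  const instance = i18next.createInstance();
  instance.use(HttpBackend).init({
    fallbackLng: 'en',
    ns: ['common', 'ui'],          // shared namespaces; apps add their own
    defaultNS: 'common',
    backend: {
      // hypothetical central resource host
      loadPath: 'https://lang.mydomain.com/{{lng}}/{{ns}}.json',
    },
    ...overrides,
  });
  return instance;
}

The desktop app could use a parallel factory whose .use() call plugs in i18next-fs-backend instead.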
PS: I used to use react-intl-universal; it is really easy, but its release schedule is falling behind.
Thank you in advance...
I'm looking for best practices for defining a project structure in Rust. Let's say I have a project that consists of a client and a server component, and they share some functionality I'd like to move into a separate common library. So, all in all, like this:
my_awesome_project
- common [library]
- client [binary] - uses common
- server [binary] - uses common
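In Cargo terms, I picture a workspace along these lines (a sketch; manifests abbreviated):

# my_awesome_project/Cargo.toml (workspace root)
[workspace]
members = ["common", "client", "server"]

# client/Cargo.toml (server is analogous)
[package]
name = "client"
version = "0.1.0"
edition = "2021"

[dependencies]
common = { path = "../common" }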
Obviously, all methods in the common crate used by either client or server must be public. However, I would like to prevent anyone who has the client binary (not the code) from being able to call methods from common. Is that possible somehow?
I come from a C# background, where common would be a DLL exposing public methods, easily callable by anyone. I've read that Rust uses static linking by default, but as I understand it, that does not provide what I am looking for.
And yes, I could duplicate the common code in a private module in both server and client, but that's not optimal either.
To the extent that you have any control over what someone does with your code, Rust's static linking will sufficiently obfuscate the library boundary that you need not worry about the methods being callable.
You will not deliver the library to the user. You will deliver a single binary, hopefully stripped of debug information, which contains the code of client and all the code from common it needs, bundled together.
Unlike C#, there is no separate DLL file, and no metadata that makes it easy to discover and use library functions.
From Puppet Best Practices:
The Puppet Labs documentation describes modules as self-contained bundles of code and data.
OK, that's clear.
A single module can easily manage a single application.
So, puppetlabs-apache manages Apache only, puppetlabs-mysql manages MySQL only.
... So my module my_company-mediawiki manages MediaWiki only (I suppose... together with its database and virtual host, because a module is a self-contained bundle of code and data).
Modules are most effective when they serve a single purpose, limit dependencies, and concern themselves only with managing system state relating to their named purpose.
But my_company-mediawiki needs to depend on:
puppetlabs-mysql: to create the database;
puppetlabs-apache: to manage a virtual host.
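In practice, that means declaring them in the module's metadata.json, something like this (version ranges illustrative):

{
  "name": "my_company-mediawiki",
  "version": "0.1.0",
  "dependencies": [
    { "name": "puppetlabs/mysql",  "version_requirement": ">= 10.0.0" },
    { "name": "puppetlabs/apache", "version_requirement": ">= 5.0.0" }
  ]
}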
And... from a quick search, I understand that many modules refer to other modules.
But...
They provide complete functionality without creating dependencies on any other modules, and can be combined as needed to build different application stacks.
OK, so a good module is self-contained and has no dependencies.
So do I necessarily have to use the roles and profiles pattern to follow these best practices? Or am I confused...
The Puppet documentation's description of modules as self-contained is more aspirational than definitive. Don't read too much into it, or into others' echoes of it. Modules are quite simply Puppet's next level of code organization above classes and defined types, incorporating also plug-ins and owned data.
Plenty of low-level modules indeed have no cross-module dependencies, but such dependencies inescapably arise when you start forming aggregations at a level between that and whole node configurations. There is nothing inherently wrong with that. The Roles & Profiles pattern is a good way to structure such aggregations, but it is not the only way, and in any case it does not avoid cross-module dependencies because role and profile classes, like any other, should themselves belong to modules.
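For example, a profile class aggregating the two modules from the question might look something like this (a sketch; mysql::db and apache::vhost are real resources from the puppetlabs modules, everything else is illustrative):

# profile/manifests/mediawiki.pp
class profile::mediawiki (
  String $db_password,
) {
  include apache
  include mysql::server

  # Database for the application, via puppetlabs-mysql.
  mysql::db { 'mediawiki':
    user     => 'mediawiki',
    password => $db_password,
  }

  # Virtual host for the application, via puppetlabs-apache.
  apache::vhost { 'wiki.example.com':
    port    => 80,
    docroot => '/var/www/mediawiki',
  }
}

The profile plainly depends on puppetlabs-apache and puppetlabs-mysql, and that is fine: the point of the pattern is to concentrate such cross-module wiring in profiles, not to eliminate it.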
I think this is not feasible, or that no tools enable it, but in theory all the needed material is there: a shared library, plus what is on the operating system, should be enough to generate a static library from a shared library.
So do you think it is theoretically feasible, and if not, why?
My issue is with libkrb5, which only provides shared libraries on Ubuntu.
Thanks!
I'm writing a .NET 4.0 library that should be efficient and simple to use.
The library is used by referencing it and using its different classes.
Should I use .NET 4.0 Tasks to make things more efficient internally? I fear that it might make the usage of the library more complex and limited, since users might want to decide for themselves when and where to use tasks and threads.
If your answer depends on the kind of library, here is more information:
The library is Pcap.Net, which is a wrapper for WinPcap and includes a packet interpretation framework.
It is only an issue when the user can 'see' the threading, i.e. when you give out access to data that could be accessed (by you) on another thread. That is probably not a good idea.
But when the parallel processing stays completely inside your library, there is very little chance your users would object.
Should? Dunno. How about giving people the option by providing extension methods that use tasks against the library, and pushing that out in a separate DLL? If you want to use tasks, reference the extension library and go crazy. Otherwise, stick with the core DLL.
I believe there are many projects that follow this pattern with LINQ: they provide their core library and a separate .Linq.dll which has the extension methods...
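A rough sketch of that pattern (the types here are hypothetical stand-ins, not the actual Pcap.Net API):

using System.Threading.Tasks;

// --- core library (MyLibrary.dll): stays synchronous ---
public class Packet { }

public class PacketReader
{
    public Packet ReadPacket() { return new Packet(); /* blocking read */ }
}

// --- companion assembly (MyLibrary.Tasks.dll): opt-in Task support ---
public static class PacketReaderExtensions
{
    public static Task<Packet> ReadPacketAsync(this PacketReader reader)
    {
        // .NET 4.0 has no Task.Run, so use Task.Factory.StartNew.
        return Task.Factory.StartNew(() => reader.ReadPacket());
    }
}

Users who want Tasks reference MyLibrary.Tasks.dll and call ReadPacketAsync(); everyone else never sees a Task.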