How do you use link_with in CocoaPods to minimise the time it takes to do `pod install`? - dependency-management

I just about get what link_with does in CocoaPods. The documentation is very brief about this option:
Specifies the target(s) in the user’s project that this Pods library should be linked in.
(from http://guides.cocoapods.org/syntax/podfile.html#link_with)
The only examples assume the user isn't using target-specific pods...
link_with 'MyApp'
Ooookay then. So what is the expected behaviour of using this in a target-specific Pods library? Why is the Pods library tied to a specific target and yet has the potential to be linked to others?
e.g. what does the following mean?
target 'FirstTarget' do
    link_with 'SecondTarget', 'ThirdTarget'
    pod ...
end
I really want to know, because the way I'm currently grouping pods makes `pod install` really slow. Just as an example, here's a bunch of common test-related pods used in multiple targets:
def test_essentials
    pod 'Specta', :git => 'https://github.com/specta/specta.git', :branch => '0.3-wip'
    pod 'Expecta'
    pod 'OCHamcrest'
    pod 'OCMockito'
    pod 'OCMock'
end

target 'MyProjSpecs', :exclusive => true do
    test_essentials
    ...
end

target 'MyProjLogicSpecs', :exclusive => true do
    test_essentials
    ...
end
Of course my project itself has more targets, about 6 more. Using this pattern duplicates a lot of target definitions and makes it really unclear what's linked to what. In total, the `Generating Pods project` and `Integrating client project` phases take about 3 minutes.
Am I doing something wrong?
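To make the question concrete, here's the kind of consolidation I'm hoping link_with enables: one exclusive pods target whose generated library is linked into both spec targets. This is a hedged sketch based on my reading of the docs, not verified behaviour:

def test_essentials
    pod 'Specta', :git => 'https://github.com/specta/specta.git', :branch => '0.3-wip'
    pod 'Expecta'
    pod 'OCHamcrest'
    pod 'OCMockito'
    pod 'OCMock'
end

# One exclusive Pods library, linked into both spec targets
target 'MyProjSpecs', :exclusive => true do
    link_with 'MyProjSpecs', 'MyProjLogicSpecs'
    test_essentials
end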

Related

BalenaOS: PyGObject NM (NetworkManager?) not found/available inside container

I'm trying to get the device's network info inside a container running on balenaOS 2.88.4+rev0 (supervisor version 14.3.3) on a Raspberry Pi Compute Module 4, using PyGObject or dbus-python on Python 3.9.2. I've been using these examples, which are also mentioned in this video (I followed along to satisfy the other dependencies/prerequisites, listed below).
In order to get this working, there are a few things that need to be done:
1. In docker-compose.yml (all of these are consolidated in the sketch after this list):
- set network_mode: "host"
- add the following label: io.balena.features.dbus: '1'
- (optional?) add:
cap_add:
  - NET_ADMIN
(the container already had privileged: true, so in this case the cap_add shouldn't make a difference)
- add the following environment variable: DBUS_SYSTEM_BUS_ADDRESS: "unix:path=/host/run/dbus/system_bus_socket"
2. Add the dependencies. I've tried both ways, but neither seems to work properly (see later):
- install the network-manager package in the desired container. This allows use of the nmcli command, which does work for me and shows the correct info. network-manager should also include libnm, although I'm not entirely sure of this. (These docs use the same implementation as the examples linked above, as well as the next section.)
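A hedged consolidation of the settings above into one docker-compose.yml fragment (the service name and build path are placeholders):

version: '2.1'
services:
  my-service:
    build: ./my-service
    network_mode: "host"
    privileged: true
    cap_add:
      - NET_ADMIN
    labels:
      io.balena.features.dbus: '1'
    environment:
      DBUS_SYSTEM_BUS_ADDRESS: "unix:path=/host/run/dbus/system_bus_socket"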
In both cases (point 2), here’s what I get:
import gi
→ no problem
gi.require_version('NM', '1.0') (I’m assuming NM stands for NetworkManager?)
→ ValueError: Namespace NM not available
from gi.repository import NM
→ ImportError: cannot import name NM, introspection typelib not found
Of course I've tried searching for these errors online, but documentation/information seems extremely sparse.
These errors occur whether I run this inside Docker on my local machine or push it to my device with balena push (local mode enabled for testing).
When I run this in a virtual environment (using Python 3.9.10) on ZorinOS 16.2 (heavily based on Ubuntu 22.04), it works without issues, leading me to believe there's still some package or setting missing…
What am I missing or doing wrong?
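For anyone comparing notes: "Namespace NM not available" usually means the GObject introspection data for libnm is missing from the container, not that NetworkManager itself is absent. A hedged sketch, assuming a Debian-based image where the typelib ships in gir1.2-nm-1.0 (verify the package names against your base image):

# In the Dockerfile (assumed Debian/Raspbian base):
#   apt-get update && apt-get install -y python3-gi gir1.2-nm-1.0 network-manager
import gi

# Raises "Namespace NM not available" when the NM typelib is absent
gi.require_version('NM', '1.0')
from gi.repository import NM

# Talks to NetworkManager over the D-Bus socket exposed via
# DBUS_SYSTEM_BUS_ADDRESS
client = NM.Client.new(None)
for device in client.get_devices():
    print(device.get_iface(), device.get_device_type())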

Azure ML release bug: AZUREML_COMPUTE_USE_COMMON_RUNTIME

On 2021-10-13, our application on the Azure ML platform started hitting a new error that causes pipeline steps to fail with Python module import failures: a warning stack that leads to a pipeline runtime error.
To work around it, we needed to set AZUREML_COMPUTE_USE_COMMON_RUNTIME to false. Why is it failing? What exactly are the (long-term) consequences of opting out? Also, fellow Azure ML users: do you think this was rolled out appropriately?
Try adding a new variable to your environment like this:
environment.environment_variables = {"AZUREML_COMPUTE_USE_COMMON_RUNTIME":"false"}
Long term (throughout 2022), AzureML will be fully migrating to the new Common Runtime on AmlCompute. Short term, this change is a large undertaking, and we're on the lookout for tricky functionality of the old Compute Runtime we're not yet handling correctly.
One small note on disabling Common Runtime: it can be more efficient (it avoids an Environment rebuild) to add the environment variable directly to the RunConfig:
run_config.environment_variables["AZUREML_COMPUTE_USE_COMMON_RUNTIME"] = "false"
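To make the two options concrete, here's a hedged sketch against the v1 azureml-core SDK (the environment name, env.yml, and train.py are placeholders; only the environment_variables lines come from this thread):

from azureml.core import Environment, ScriptRunConfig
from azureml.core.runconfig import RunConfiguration

# Option 1: on the Environment (can trigger an Environment rebuild)
env = Environment.from_conda_specification(name="my-env", file_path="env.yml")
env.environment_variables = {"AZUREML_COMPUTE_USE_COMMON_RUNTIME": "false"}

# Option 2: directly on the run configuration (avoids the rebuild)
run_config = RunConfiguration()
run_config.environment = env
run_config.environment_variables["AZUREML_COMPUTE_USE_COMMON_RUNTIME"] = "false"

src = ScriptRunConfig(source_directory=".", script="train.py", run_config=run_config)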
We'd like to get more details about the import failures, so we can fix the regression. Are you setting the PYTHONPATH environment variable to make your custom scripts importable? If so, this is something we're aware isn't working as expected and are looking to fix it within the next two weeks.
We identified the issue and have rolled out a hotfix on our end addressing the issue. There are two problems that could've caused the import issue. One is that we are overwriting the PYTHONPATH environment variable. The second is that we are not adding the python script's containing directory to python's module search path if the containing directory is not the current working directory.
It would be great if you could try again without setting the AZUREML_COMPUTE_USE_COMMON_RUNTIME environment variable and see if the problem is still there. If it is, please reply to either Lucas's thread or mine with a minimal repro, or a description of where the module you are trying to import is located relative to the script being run and the root of the snapshot (which is the current working directory).

How Do We Wire Up Converted Unit Tests in Doppl?

I am attempting to replicate what I see in PartyClickerSample in a fresh project, and am having difficulty with the pod and using it from Swift to set up the unit tests.
Based on PartyClickerSample, AFAICT, what I am supposed to do is put a Podfile like this in the iosTest/ directory (that contains a newly-created Xcode project):
platform :ios, '9.0'

target 'iosTest' do
    use_frameworks!
    pod 'testdoppllib', :path => '../app/build'
end
Then:
In AppDelegate.swift, import testdoppllib and call DopplRuntime.start() from the application() func
In ViewController.swift, import testdoppllib and call runResource() on... something; I can't quite figure out what it maps to
However, I can't even get to the latter bullet, as things start going sideways from the outset.
pod install seems to work as expected:
Analyzing dependencies
Fetching podspec for `testdoppllib` from `../app/build`
Downloading dependencies
Installing testdoppllib (0.1.0)
Generating Pods project
Integrating client project
[!] Please close any current Xcode sessions and use `iosTest.xcworkspace` for this project from now on.
Sending stats
Pod installation complete! There is 1 dependency from the Podfile and 1 total pod installed.
However, when I re-open the workspace:
In the Xcode tree thingy, Pods/Products/ shows testdoppllib.framework in red (which doesn't look good) and also shows Pods_iosTest.framework in black
If I try import testdoppllib, I get a message saying that Xcode does not recognize that name
If I try import Pods_iosTest, Xcode seems to find it, but then it does not recognize DopplRuntime.start()
So, what are the steps, in a Cocoapods-based Doppl setup, for starting the Doppl-created unit tests in Xcode?
Running pod install will set up the testdoppllib framework, but doesn't actually build it. One of the frustrating parts of the CocoaPods process is that you'll need to run a build in Xcode, which should first build testdoppllib, then your Swift code.
To summarize: testdoppllib shows up in red, but it's most likely OK and just needs to be built. Once it's built, your Swift code should see "import testdoppllib".
For runResource, that's a little more complicated.
The Doppl gradle plugin writes a file called dopplTests.txt to the build/j2objcSrcGenTest directory. That's a listing of all the test classes. You need to add that file to Xcode, then pass in that name to DopplJunitTestHelper.runResource.
There's probably a way to set that up with cocoapods, but we haven't done that yet.
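Putting the pieces of this thread together, the wiring might look roughly like the sketch below (hedged: the thread doesn't show the exact runResource signature, so the call is a best guess):

// AppDelegate.swift in the iosTest project
import UIKit
import testdoppllib

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Start the Doppl/J2ObjC runtime before any translated code runs
        DopplRuntime.start()
        // dopplTests.txt is generated into build/j2objcSrcGenTest by the
        // Doppl gradle plugin and must be added to the Xcode project first
        DopplJunitTestHelper.runResource("dopplTests.txt")
        return true
    }
}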

Resize container resources online

How can I resize a container's resources after creation, while it is running? I would like a permanent solution that isn't reset when the container restarts.
I've set resources in creation time with following options:
-c, --cpu-shares=0
--cpuset=""
-m, --memory=""
I have already tried to change values here
/sys/fs/cgroup/
Out of the box you can't do this; containers are meant to be immutable, at least configuration-wise.
You might be better off using docker-compose to define these parameters, so they are always set for a given application.
An untested example docker-compose.yml might look like this:
awesome_app:
    cpu_shares: 512
    cpuset: "0,1"
    mem_limit: 128m
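For completeness, the same limits at creation time with docker run, using the flags listed in the question (the values and image name are illustrative):

docker run -d --cpu-shares=512 --cpuset="0,1" --memory="128m" awesome/app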

Chef wrapper cookbooks only apply internal cookbook once

I have a cookbook, "blah-deploy-nodejs-from-git", that installs a Node.js codebase from Git and calls npm install on the directory. It has the following attributes:
git_repo
branch
destination
I have then written cookbooks that wrap that cookbook for the individual sites that need to be installed, in this particular case "blah-pricing" and "blah-notifications", which have different overriding attributes:
me@me:~/chef-repo$ cat cookbooks/blah-svc-pricing/attributes/default.rb
node.override[:blah_deploy_nodejs_from_git][:destination] = "/var/blah/pricing"
node.override[:blah_deploy_nodejs_from_git][:branch] = "master"
node.override[:blah_deploy_nodejs_from_git][:git_repo] = "https://hqdevgit01.blah.lan/micro-services/blah-pricing.git"
me@me:~/chef-repo$ cat cookbooks/blah-svc-notifications/attributes/default.rb
node.override[:blah_deploy_nodejs_from_git][:destination] = "/var/blah/notifications"
node.override[:blah_deploy_nodejs_from_git][:branch] = "master"
node.override[:blah_deploy_nodejs_from_git][:git_repo] = "https://hqdevgit01.blah.lan/micro-services/blah-notifications.git"
And then the recipe is the same in both cases:
include_recipe 'blah-deploy-nodejs-from-git'
Unfortunately it is applying the inner recipe only once even though my node has both cookbooks applied to it. My understanding was that wrapper cookbooks are used to customize a cookbook and make it unique.
Can I encapsulate the inner cookbook in two different wrapper cookbooks, with different attributes, and have both wrappers apply that inner recipe? Or am I going to have to completely replicate the code that is in the inner cookbook?
This is due to a basic misunderstanding of how chef works. Recipes are not meant to be a procedure for how to do something, they are meant to be a declaration of what that something should look like. As such, you need to think of them as describing the end state, not the process for getting there.
Thus, chef will never run a recipe twice, and attributes really should not be changed mid-run (unless they are updated to reflect something that happened mid-run). Luckily, there are other chef capabilities that can solve your problem: you need either a definition or an LWRP (lightweight resource provider).
Definitions are just groups of resources that are often repeated. So you can create a definition and then later call it multiple times in the same recipe with different attributes. Much like what you currently are doing with your recipe.
While definitions are sometimes appropriate, LWRPs are generally more powerful, and have become the preferred approach for most repetitive (library-like) tasks in Chef. With LWRPs you define a new chef primitive (much like file, service, etc.), and then write the code for accomplishing the goal of that primitive. You can then use these resources anywhere in your recipes. In your case, you'd have an npm_deployer resource that took attributes for repo, branch, and destination. It would then do the work that is currently in your deployer recipe. Your "wrapper" recipes would then stop calling include_recipe and instead just declare an npm_deploy resource and pass in the attributes needed.
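As a rough sketch of the definition route (hedged: the definition body below is a guess at what the inner recipe does, built only from the attributes named in the question):

# cookbooks/blah-deploy-nodejs-from-git/definitions/nodejs_deploy.rb
define :nodejs_deploy, :branch => 'master' do
    # Check out the requested branch of the repo into the destination
    git params[:destination] do
        repository params[:git_repo]
        revision params[:branch]
        action :sync
    end
    # Install node dependencies in the checked-out tree
    execute "npm-install-#{params[:name]}" do
        command 'npm install'
        cwd params[:destination]
    end
end

Each wrapper recipe then declares its own instance instead of using include_recipe, so the resources run once per site:

# cookbooks/blah-svc-pricing/recipes/default.rb
nodejs_deploy 'pricing' do
    git_repo 'https://hqdevgit01.blah.lan/micro-services/blah-pricing.git'
    branch 'master'
    destination '/var/blah/pricing'
end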
