Starting Node server from Scala app with environment variables - node.js

I am trying to start a Node server from a Scala app using a shell command, but when I add environment variables I get the following error:
Exception in thread "main" java.io.IOException: Cannot run program "TEST_ENV_VAR=fake-key": error=2, No such file or directory
The code looks like this:
import java.nio.file.{Files, Path}
import scala.language.postfixOps
import scala.sys.process.*

@main def app() =
  val path = Path.of(scala.sys.props.get("user.home").get + "/projects")
  s"TEST_ENV_VAR=fake-key node $path/node-server/server.js" !!
It looks like it is trying to run the environment variables instead of the server. Is there a way to get it to recognize them correctly? I have been able to run a node server successfully using this method when I don't include the environment variable.
Thank you!

The VAR=value prefix is shell syntax; !! runs the program directly rather than through a shell, so the whole prefix is treated as the name of the executable. Pass the environment variable through scala.sys.process.Process instead, which accepts extra environment entries alongside the command and working directory. Here is an example that uses the scala.sys.process.Process object:
import java.io.File
import scala.sys.process.*

@main def app() =
  val file = File(scala.sys.props.get("user.home").get + "/projects")
  Process("node node-server/server.js", Some(file), "TEST_ENV_VAR" -> "fake-key").!!

Related

Jenkins executing python script

I am trying to call a Python script from Jenkins. I am able to execute the script successfully from the command prompt, but when the same script is called from Jenkins I run into a DLL problem. My script is as follows:
import isystem.connect as ic
import time
import os
print('isystem.connect version: ' + ic.getModuleVersion())
# 1. connect to winIDEA Application
pathTowinIDEA = 'C:/winIDEA/iConnect.dll'
cmgr_APPL = ic.ConnectionMgr(pathTowinIDEA)
cmgr_APPL.connectMRU('U:/winIDEA/Winidea_cust_config.xjrf.xjrf')
debug_APPL = ic.CDebugFacade(cmgr_APPL)
ec = ic.CExecutionController(cmgr_APPL)
print('APPL Configuration launched')
The error appears at line 9:
cmgr_APPL.connectMRU('U:/winIDEA/Winidea_cust_config.xjrf.xjrf')
Error message in Jenkins:
E:6 Can not load iConnect.dll: 'C:/winIDEA/iConnect64.dll'
SystemError: 'The specified module could not be found.
I have verified that iConnect64.dll is present at that path. How can I resolve this?
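One common cause of this Windows error (an assumption, not confirmed by the post) is that the Jenkins service runs with a different user and PATH than your interactive session, so Windows cannot locate the DLLs that iConnect64.dll itself depends on. A minimal sketch of how the script could make the winIDEA install directory visible before connecting; the directory C:/winIDEA is assumed:
import os
import isystem.connect as ic

winidea_dir = 'C:/winIDEA'  # assumed install directory

# Make sure the Jenkins-spawned process can find iConnect64.dll and the DLLs
# it depends on, even if the service account's PATH does not include winIDEA.
os.environ['PATH'] = winidea_dir + os.pathsep + os.environ.get('PATH', '')
if hasattr(os, 'add_dll_directory'):  # needed on Python 3.8+ for DLL lookup
    os.add_dll_directory(winidea_dir)

cmgr_APPL = ic.ConnectionMgr(winidea_dir + '/iConnect.dll')
cmgr_APPL.connectMRU('U:/winIDEA/Winidea_cust_config.xjrf.xjrf')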

Unable to run celery task directly but still possible via Python console

I'd like to run a simple test (run a task) first via RabbitMQ, and once this is set up correctly, encapsulate it in Docker and run it from there.
My structure looks like so:
- rabbitmq_docker
  - test_celery
    - __init__.py
    - celeryapp.py
    - celeryconfig.py
    - run_tasks.py
    - tasks.py
  - docker-compose.yml
  - dockerfile
  - requirements.txt
celeryconfig.py
## List of modules to import when celery starts
CELERY_IMPORTS = ['test_celery.tasks',] # Required to import module containing tasks
## Message Broker (RabbitMQ) settings
CELERY_BROKER_URL = "amqp://guest@localhost//"
CELERY_BROKER_PORT = 5672
CELERY_RESULT_BACKEND = 'rpc://'
celeryapp.py
from celery import Celery
app = Celery('test_celery')
app.config_from_object('test_celery.celeryconfig', namespace='CELERY')
__init__.py
from .celeryapp import app as celery_app
run_tasks.py
from tasks import reverse
from celery.utils.log import get_task_logger
LOGGER = get_task_logger(__name__)
if __name__ == '__main__':
    async_result = reverse.delay("rabbitmq")
    LOGGER.info(async_result.get())
tasks.py
from test_celery.celeryapp import app
@app.task(name='tasks.reverse')
def reverse(string):
    return string[::-1]
I run celery -A test_celery worker --loglevel=info from the rabbitmq_docker directory. Then, in a separate window, after importing the required module, I trigger reverse.delay("rabbitmq") in the Python console. This works. But when I try to trigger the reverse function via run_tasks.py, i.e. python test_celery/run_tasks.py, I get:
Traceback (most recent call last):
File "test_celery/run_tasks.py", line 1, in <module>
from tasks import reverse
File "/Users/my_mbp/Software/rabbitmq_docker/test_celery/tasks.py", line 1, in <module>
from test_celery.celeryapp import app
ModuleNotFoundError: No module named 'test_celery'
What I don't get is why this Traceback doesn't get thrown when called directly from the Python console. Could anyone help me out here? I'd eventually like to startup docker, and just run the tests automatically (without going into the Python console).
The problem is simply that your module is not on the Python path.
These should help:
Specify PYTHONPATH so that it points to the directory containing your test_celery package.
Always run your Python code from the directory where your test_celery package is located.
Or alternatively reorganise your imports (see the sketch below)...
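For example, a minimal sketch of the last option, assuming you keep the layout above, switch run_tasks.py to an absolute import, and launch it from the rabbitmq_docker directory as a module with python -m test_celery.run_tasks:
# test_celery/run_tasks.py
from test_celery.tasks import reverse  # absolute import: rabbitmq_docker is on
                                       # sys.path when run via `python -m ...`
from celery.utils.log import get_task_logger

LOGGER = get_task_logger(__name__)

if __name__ == '__main__':
    async_result = reverse.delay("rabbitmq")
    LOGGER.info(async_result.get())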

aiohttp 1->2 with gunicorn unable to use timeout context manager

My small aiohttp 1 app was working fine until I started preparing to deploy to Heroku. Heroku forces me to use gunicorn. It wasn't working with aiohttp 1 (some strange errors), so after a small migration to aiohttp 2 I started the app with the
gunicorn app:application --worker-class aiohttp.GunicornUVLoopWebWorker
command, and it works until I request a view that calls a class's async method with
with async_timeout.timeout(self.timeout):
    res = await self.method()
inside, and it raises a RuntimeError: Timeout context manager should be used inside a task error.
The only thing that changed since the last working version is that I'm now using gunicorn. I'm using Python 3.5.2, uvloop 0.8.0, and gunicorn 19.7.1.
What is wrong and how can I fix this?
UPD
The problem is that I have a LOOP variable that stores a newly created event loop. This variable is defined in the settings.py file, so before the aiohttp.web.Application is created, the code below gets evaluated:
LOOP = uvloop.new_event_loop()
asyncio.set_event_loop(LOOP)
and this causes the error when the async_timeout.timeout context manager is used.
Code to reproduce the error:
# test.py
from aiohttp import web
import uvloop
import asyncio
import async_timeout

asyncio.set_event_loop(uvloop.new_event_loop())

class A:
    async def am(self):
        with async_timeout.timeout(timeout=1):
            await asyncio.sleep(0.5)

class B:
    a = A()

    async def bm(self):
        await self.a.am()

b = B()

async def hello(request):
    await b.bm()
    return web.Response(text="Hello, world")

app = web.Application()
app.router.add_get('/', hello)

if __name__ == '__main__':
    web.run_app(app, port=5000, host='localhost')
and just run gunicorn test:app --worker-class aiohttp.GunicornUVLoopWebWorker (the same error occurs when using GunicornWebWorker with a uvloop event loop, or any other combination).
Solution
I fixed this by calling asyncio.get_event_loop() whenever I need the event loop, instead of storing a loop created at import time.
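A minimal sketch of that fix, under the assumption that the loop used to live in settings.py (the helper name get_loop is made up for illustration):
# settings.py -- no event loop is created or stored at import time any more.
import asyncio

def get_loop():
    # Looked up lazily, inside the running gunicorn worker, so it returns the
    # loop the aiohttp worker is actually driving rather than a stale
    # module-level one created before the worker started.
    return asyncio.get_event_loop()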

Pyspark running external program using subprocess can't read files from hdfs

I'm trying to run an external program (such as bwa) within PySpark. My code looks like this:
import sys
import subprocess
from pyspark import SparkContext
def bwaRun(args):
    a = ['/home/hd_spark/tool/bwa-0.7.13/bwa', 'mem', ref, args]
    result = subprocess.check_output(a)
    return result
sc = SparkContext(appName = 'sub')
ref = 'hdfs://Master:9000/user/hd_spark/spark/ref/human_g1k_v37_chr13_26577411_30674729.fasta'
input = 'hdfs://Master:9000/user/hd_spark/spark/chunk_interleaved.fastq'
chunk_name = []
chunk_name.append(input)
data = sc.parallelize(chunk_name,1)
print data.map(bwaRun).collect()
I'm running Spark on a YARN cluster with 6 slave nodes, and the bwa program is installed on each node. When I run the code, the bwaRun function can't read the input files from HDFS. It's fairly obvious why this doesn't work: when I tried to run the bwa program locally with
bwa mem hdfs://Master:9000/user/hd_spark/spark/ref/human_g1k_v37_chr13_26577411_30674729.fasta hdfs://Master:9000/user/hd_spark/spark/chunk_interleaved.fastq
on the shell, it didn't work either, because bwa can't read files from HDFS.
Can anyone give me an idea of how I could solve this?
Thanks in advance!
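One possible workaround, sketched under the assumption that each executor can reach HDFS through the hdfs command-line client (the temp-directory handling here is made up), is to copy the inputs onto the executor's local disk before calling bwa, since bwa only reads from the local filesystem:
import os
import subprocess
import tempfile

def bwaRun(hdfs_fastq):
    # Stage the HDFS inputs on the executor's local disk first.
    local_dir = tempfile.mkdtemp(prefix='bwa_')
    local_ref = os.path.join(local_dir, os.path.basename(ref))
    local_fastq = os.path.join(local_dir, os.path.basename(hdfs_fastq))
    subprocess.check_call(['hdfs', 'dfs', '-get', ref, local_ref])
    subprocess.check_call(['hdfs', 'dfs', '-get', hdfs_fastq, local_fastq])
    # Run bwa against the local copies instead of the hdfs:// URIs.
    return subprocess.check_output(
        ['/home/hd_spark/tool/bwa-0.7.13/bwa', 'mem', local_ref, local_fastq])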

Loading python modules in Python 3

How do I load a Python module that is not built in? I'm trying to create a plugin system for a small project I'm working on. How do I load those "plugins" into Python? And, instead of calling "import module", how do I use a string to reference the module?
Have a look at importlib
Option 1: Import an arbitrary file at an arbitrary path
Assume there's a module at /path/to/my/custom/module.py containing the following contents:
# /path/to/my/custom/module.py
test_var = 'hello'
def test_func():
    print(test_var)
We can import this module using the following code:
import importlib.machinery
myfile = '/path/to/my/custom/module.py'
sfl = importlib.machinery.SourceFileLoader('mymod', myfile)
mymod = sfl.load_module()
The module is imported and assigned to the variable mymod. We can then access the module's contents as:
mymod.test_var
# prints 'hello' to the console
mymod.test_func()
# also prints 'hello' to the console
Option 2: Import a module from a package
Use importlib.import_module
For example, if you want to import settings from a settings.py file in your application root folder, you could use
_settings = importlib.import_module('settings')
The popular task queue package Celery uses this approach a lot; rather than giving you code examples here, I suggest checking out their Git repository.
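To tie this back to the plugin idea in the question, here is a minimal sketch of a string-driven plugin loader; the plugins package and module names are made up for illustration:
import importlib

# Hypothetical plugin registry: dotted module paths stored as plain strings,
# e.g. read from a configuration file.
plugin_names = ['plugins.hello', 'plugins.stats']

plugins = {}
for name in plugin_names:
    # import_module takes the module path as a string and returns the module
    # object, just like a regular import statement would.
    plugins[name] = importlib.import_module(name)

# Each plugin's attributes are then reachable as plugins['plugins.hello'].run(), etc.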
