I have created a Docker image to run some Python code on a Lambda; it essentially looks like this:
FROM public.ecr.aws/lambda/python as build
COPY . .
RUN pip3 install -r requirements.txt
RUN chmod +x run.sh
ENTRYPOINT [ "./run.sh" ]
And it's calling run.sh which looks like this:
#!/bin/bash
# Pull the code from S3
aws s3 cp s3://$S3_REPO/ /tmp/ --recursive --exclude "*" --include "*.py"
cd /tmp
# Run the run() function from main.py
python3 -c "from main import run; run()"
Which in turn calls a Python function that looks something like this:
import datetime
import json

import boto3
from botocore.exceptions import ClientError

def run():
    BUCKET_NAME = "this-bucket"
    file_name = f"{datetime.datetime.utcnow()}.json"
    output = somefunction()
    # Create an S3 client
    s3 = boto3.client('s3')
    # Convert the output dictionary to a JSON string
    output_json = json.dumps(output)
    # Check if the bucket exists
    try:
        s3.head_bucket(Bucket=BUCKET_NAME)
    except ClientError:
        # head_bucket raises ClientError (not NoSuchBucket) when the bucket is missing
        s3.create_bucket(Bucket=BUCKET_NAME)
    # Upload the object to the bucket
    s3.put_object(Bucket=BUCKET_NAME, Key=file_name, Body=output_json)
    return 0
Now that all seemed pretty straightforward, but I have two problems I have been wrestling with (and trying to get assistance with via ChatGPT).
When the Lambda runs, it always gives this error:
{
  "errorType": "Runtime.ExitError",
  "errorMessage": "RequestId: c964c24b-db15-4b37-87fd-27406041f85a Error: Runtime exited without providing a reason"
}
But the code has executed correctly and my expected data is in the S3 bucket... twice, about 10-15 seconds apart.
The logs indicate that the Python function has been run twice, every time. I can confirm that asynchronous invocation has been set to 0 retries. I have been struggling with this for a few days now; I'm very keen to be pointed in the right direction, as I am wildly confused and not keen on doubling my S3 usage unnecessarily.
EDIT
Adding this to the bash script confirms that Python is running and exiting successfully. It appears to be an issue with exiting the bash script itself?
if python3 -c "from main import run; run()"
then echo "Success!"
fi
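For reference, the Lambda Python base image normally runs a handler function through its built-in runtime interface client rather than a free-standing ENTRYPOINT script; the client keeps the process alive between invocations instead of letting it exit. A minimal sketch of that pattern (the module name, and the work done inside, are placeholders):

```python
# app.py -- a sketch of the handler shape the Lambda Python base image
# expects when CMD is set to "app.handler" instead of overriding ENTRYPOINT;
# the body here is a placeholder for the real work
def handler(event, context):
    # the S3 sync and main.run() call currently done in run.sh would go here
    return {"status": "ok"}
```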
Related
I am building a script to update files on Bitbucket using the REST API.
My problems are:
Running the command with the subprocess library and running it directly on the command line give two different results.
If I run the command from the command line and then inspect my commits in the Bitbucket app, I can see a commit message and an issue.
If I run the command via the subprocess library, I end up with neither: the commit message defaults to "edited by bitbucket" and the issue is null.
This is the command:
update_file_url = "curl -X PUT -u user:pass -F content=#conanfile_3_f62hu.py -F 'message= test 4' -F branch=develop -F sourceCommitId={} bitbucket_URL".format(latest_commit)
The other problem is that I need to pass a file to the content in order to update it.
If I pass it like above it works. The problem is that I am generating the file content as a raw string and creating a temporary file with that content.
And when I pass the file as a variable, it does not get the content of the file.
My code:
import os
import subprocess
import tempfile

content = b'some content'
current_dir = os.getcwd()
temp_file = tempfile.NamedTemporaryFile(suffix=".py", prefix="conanfile", dir=current_dir)
temp_file.name = temp_file.name.split("\\")
temp_file.name = [x for x in temp_file.name if x.startswith("conanfile")][0]
temp_file.name = "#" + temp_file.name
temp_file.write(content)
temp_file.seek(0)
update_file_url = "curl -X PUT -u user:pass -F content={} -F 'message=test 4' -F branch=develop -F sourceCommitId={} bitbucket_url".format(temp_file.name, latest_commit)
subprocess.run(update_file_url)
Basically I am passing the file like before, just passing the name to the content, but it does not work.
If I print the command everything looks good, so I don't know why the commit message and the file content do not get set.
Updated:
I was able to pass the file; my mistake was that I was not passing it as temp_file.name.
But I could not solve the problem of the message.
What I found is that the message will only take the first word; if there is a space and another word after it, everything after the space is ignored.
The space was causing the problem.
I found the solution: if anyone runs into this problem, you need to use a \ before the quote after message=.
Example: '-F message=\" Updated with latest dependencies"'
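For what it's worth, the quoting problem disappears entirely if the command is passed to subprocess as an argument list instead of a single string. A sketch (credentials, commit id, and URL are placeholders; the content=# syntax mirrors the command above):

```python
import subprocess

def build_update_cmd(filename, message, branch, commit_id, url):
    # each list element becomes exactly one argument to curl, so the space
    # in "test 4" survives without any backslash escaping
    return [
        "curl", "-X", "PUT", "-u", "user:pass",
        "-F", "content=#" + filename,      # same content syntax as the question
        "-F", "message=" + message,
        "-F", "branch=" + branch,
        "-F", "sourceCommitId=" + commit_id,
        url,
    ]

cmd = build_update_cmd("conanfile.py", "test 4", "develop",
                       "abc123", "https://bitbucket.example/rest/api")
# subprocess.run(cmd, check=True)  # uncomment to actually invoke curl
```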
I want to use a shell script function, like the one below, with GNU parallel.
This is part of my code.
#!/bin/bash
# Figure out script absolute path
pushd `dirname $0` > /dev/null
BIN_DIR=`pwd`
popd > /dev/null
ROOT_DIR=`dirname $BIN_DIR`
export ROOT_DIR
CLASS_NAME=$3
export CLASS_NAME
invoke_driver() {
$ROOT_DIR/DRIVER_DIR $CLASS_NAME $1
}
export -f invoke_driver
parallel invoke_driver :::: 'method_list.txt'
In the 'method_list.txt' file, method names are listed line by line, like below:
method1
method2
...
The driver file takes only two arguments as input.
The driver in this code is a fuzzing tool which runs endlessly.
So I want to pass each method to the function as input and run this tool in parallel.
For example, if there are 3 methods in the txt file, I would like to write code that fuzzes each method in parallel.
But when I run this code, an error occurs.
Please let me know how to solve this problem.
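As a point of comparison, the same fan-out can be sketched in Python with a thread pool instead of GNU parallel. The driver command and class name below are placeholders (/bin/echo stands in for the real fuzzer), and each call corresponds to one invoke_driver job:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def invoke_driver(method, driver="/bin/echo", class_name="SomeClass"):
    # placeholder driver; mirrors "$ROOT_DIR/DRIVER_DIR $CLASS_NAME $1"
    return subprocess.run([driver, class_name, method]).returncode

def run_all(method_file):
    # one method name per line, like method_list.txt
    with open(method_file) as f:
        methods = [line.strip() for line in f if line.strip()]
    # run one driver process per method concurrently
    with ThreadPoolExecutor() as pool:
        return list(pool.map(invoke_driver, methods))
```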
Like several other users who have posted to Stack Overflow, I ran into problems with file provisioners, and the Terraform documentation says we should not rely on them.
What's the best way to work around file provisioners - specifically for local config files and scripts?
One solution, which works very well and does not require a direct connection to the instance, is to use the userdata as a hook to "install" the files from the base64 version of the file(s).
We can actually embed the files as base64 strings in the userdata initialization scripts. This works for both Windows and Linux instances in AWS, and is compatible also with having a userdata script run on startup.
Solution Description:
During terraform plan, encode whatever local files you need as base64 strings using terraform functions base64encode(file("path/to/file")).
(Optional) Save a marker file (_INIT_STARTED_) at the start of userdata execution; this file will have the creation timestamp of when the userdata execution started.
Before running the actual userdata script, write the base64 strings to text files. (The actual command varies between windows and linux, see examples below.)
Run the userdata script itself (userdata_win.bat or userdata_lin.sh)
(Optional) Finally, save a second marker file (_INIT_COMPLETE_) which will have the creation timestamp of when the userdata script completed. (The absence of this file is also helpful to detect script failures and/or still-running scripts after logging into the instance.)
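The encode/decode round-trip those steps rely on is exact: base64 output is plain ASCII, so it embeds safely in the rendered userdata, and decoding on the instance reproduces the file byte for byte. A quick Python illustration (the file content is a made-up stand-in for config.json):

```python
import base64

# stands in for terraform's base64encode(file("path/to/file")) ...
original = b'{"setting": "value"}\n'
encoded = base64.b64encode(original).decode("ascii")  # printable, safe to embed
# ... and for `base64 --decode` (or certutil -decode) on the instance
decoded = base64.b64decode(encoded)
assert decoded == original  # byte-for-byte identical
```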
For AWS Linux instances:
data "template_file" "userdata_lin" {
  template = <<EOF
#!/bin/bash
mkdir -p /home/ubuntu/setup-scripts
cd /home/ubuntu/setup-scripts
touch _INIT_STARTED_
echo ${base64encode(file("${path.module}/userdata_lin.sh"))} | base64 --decode > userdata.sh
echo ${base64encode(file("${path.module}/config.json"))} | base64 --decode > config.json
${file("${path.module}/userdata_lin.sh")}
sudo chmod 777 *
touch _INIT_COMPLETE_
EOF
}
# ...
resource "aws_instance" "my_linux_instance" {
  # ...
  user_data = data.template_file.userdata_lin.rendered
}
For AWS Windows instances:
data "template_file" "userdata_win" {
  template = <<EOF
<script>
mkdir C:\Users\Administrator\setup-scripts
cd C:\Users\Administrator\setup-scripts
echo "" > _INIT_STARTED_
echo ${base64encode(file("${path.module}/userdata_win.bat"))} > tmp1.b64 && certutil -decode tmp1.b64 userdata.bat
echo ${base64encode(file("${path.module}/config.json"))} > tmp2.b64 && certutil -decode tmp2.b64 config.json
${file("${path.module}/userdata_win.bat")}
echo "" > _INIT_COMPLETE_
</script>
<persist>false</persist>
EOF
}
# ...
resource "aws_instance" "my_windows_instance" {
  # ...
  user_data = data.template_file.userdata_win.rendered
}
I am having a problem that I have been spending quite a lot of time on. I have a few lines of code that I run through a BAT file that I would like to run from a Python script instead, so that I can feed in variables and create the cmd string through Python code.
The code is used to run Tableau bridge client to sync a data extract. This is the code and it works when I run it from a BAT file:
@echo off
cd C:\Program Files\Tableau\Tableau 10.3\bin
tableau refreshextract --server https://dub01.online.tableau.com --username "username" --password "password" --site "sitename" --project "project name" --datasource "datasource name" --source-username sql_backend_username --source-password sql_backend_password
I understand that the first line sets the working directory to the folder where Tableau is installed. And the second line does the refreshing.
I have been trying to make this work in Python by just doing something like this:
import os
os.chdir("C:\\Program Files\\Tableau\\Tableau 10.3\\bin")
os.system("tableau refreshextract ...")  # same arguments as in the BAT file
but all that this does is run the Tableau client, and not the process running the extract. I have been looking at some examples using Popen, but all of that code is so complex that I don't understand how it actually works.
Hope someone can help me out.
If it works as a bat file, try running that from python.
os.system("start mybatch.bat")
I would recommend you look at the subprocess module.
As an example:
subprocess.call(["C:\\Program Files\\Tableau\\Tableau 10.3\\bin\\tableau", "refreshextract", "etc. etc."])
a simpler example:
from subprocess import call
call(["ping", "localhost"])
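Building on that, a fuller sketch with the parameters from the question (all values below are placeholders copied from the BAT file); passing the command as a list means multi-word names need no quoting, and cwd= can replace os.chdir:

```python
import subprocess

def build_refresh_cmd(server, username, password, site, project, datasource):
    # list form keeps multi-word values like "project name" as single arguments
    return [
        "tableau", "refreshextract",
        "--server", server,
        "--username", username,
        "--password", password,
        "--site", site,
        "--project", project,
        "--datasource", datasource,
    ]

cmd = build_refresh_cmd("https://dub01.online.tableau.com", "username",
                        "password", "sitename", "project name",
                        "datasource name")
# run from Tableau's bin directory without changing the global working dir:
# subprocess.run(cmd, cwd=r"C:\Program Files\Tableau\Tableau 10.3\bin", check=True)
```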
OK, so building on the code/methods others suggested here, I did manage to get it working. It might be kind of a "hacky" solution, but it looks like it is working OK. Here is the example:
import json
import os
import subprocess

def refresh_extract(project_name, datasource_name):
    # JSON file containing most of the variables used
    params = json.load(open('all_paths.json'))
    server = params['tableau']['server']
    username = params['tableau']['username']
    password = params['tableau']['password']
    site = params['tableau']['site']
    source_username = params['tableau']['source-username']
    source_password = params['tableau']['source-password']
    filepath = "Path to bat file that is called"
    executable = os.path.join(filepath, 'tableau_refresh.bat')
    # Set the working dir to where the Tableau files are stored; this is the
    # "hacky" part, needed for the bat to find tableau
    os.chdir("C:\\Program Files\\Tableau\\Tableau 10.3\\bin")
    p = subprocess.Popen([executable, server, username, password, site,
                          project_name, datasource_name,
                          source_username, source_password])
    return 0

if __name__ == "__main__":
    project_name = "project_name"
    datasource_name = "data_source_name"
    run = refresh_extract(project_name, datasource_name)
Thanks a lot to everyone that helped me figure it out!
This is the code for the BAT file, just in case it could help someone else:
@echo off
REM the cd below seems not to be working, therefore I set the working dir
REM using os.chdir from Python
cd C:\Program Files\Tableau\Tableau 10.3\bin
tableau refreshextract --server %1 --username %2 --password %3 --site %4 --project %5 --datasource %6 --source-username %7 --source-password %8
PAUSE
I have created a shell script, and inside of it is a simple statement, unzip -o $1. Running it through the terminal and passing a .zip file as a parameter works fine and takes 5 seconds to create the unzipped folder. Now I am trying to do the same thing in Scala, and my code is as below:
object ZipExt extends App {
  val process = Runtime.getRuntime.exec(Array[String]("/home/administrator/test.sh", "/home/administrator/MyZipFile_0.8.6.3.zip"))
  process.waitFor
  println("done")
}
Now whenever I try to execute ZipExt, it gets stuck in process.waitFor forever and the print statement is never reached. I have tried this code both locally and on the server. I have tried other possibilities too, like creating a local variable inside the shell script, including exit statements in the shell script, and unzipping .zip files other than mine; sometimes the print statement even executes, but no unzipped file is created. So I am pretty sure there is something wrong with executing the unzip command programmatically to unzip a file, or there is some other way to unzip a zipped file programmatically. I have been stuck on this problem for about 2 days, so somebody please help.
The information you have given us appears to be insufficient to reproduce the problem:
% mkdir 34088099
% cd 34088099
% mkdir junk
% touch junk/a junk/b junk/c
% zip -r junk.zip junk
updating: junk/ (stored 0%)
adding: junk/a (stored 0%)
adding: junk/b (stored 0%)
adding: junk/c (stored 0%)
% rm -r junk
% echo 'unzip -o $1' > test.sh
% chmod +x test.sh
% scala
Welcome to Scala version 2.11.7 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_66).
Type in expressions to have them evaluated.
Type :help for more information.
scala> val process = Runtime.getRuntime.exec(Array[String]("./test.sh", "junk.zip"))
process: Process = java.lang.UNIXProcess@35432107
scala> process.waitFor
res0: Int = 0
scala> :quit
% ls junk
a b c
I would suggest trying this same reproduction on your own machine. If it succeeds for you too, then start systematically reducing the differences between the succeeding case and the failing case, a step at a time. This will help narrow down what the possible causes are.
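One more difference worth checking if the reproduction succeeds but the real case still hangs: with Runtime.exec, a child process that writes a lot of output can block once the stdout/stderr pipe buffers fill, and then waitFor never returns unless the streams are drained. The same pitfall and its fix, sketched in Python for brevity:

```python
import subprocess
import sys

# a child that produces a lot of output; with stdout captured in a pipe,
# waiting without reading can deadlock once the OS pipe buffer fills.
# communicate() drains both streams while waiting, so it always completes.
p = subprocess.Popen([sys.executable, "-c", "print('x' * 200000)"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()  # reads all output AND waits for exit
```

The Java-side equivalent is to read the process's getInputStream/getErrorStream (or redirect them) before calling waitFor.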