AWS CLI - All commands return Unknown output type: [None]

All of my aws-cli commands returned
Unknown output type: [None]
I checked my configuration with
$ aws configure
It appeared normal, but I was unable to edit my 'Default output format'.
I ran my aws-cli command with --debug and saw:
MainThread - awscli.clidriver - DEBUG - Exception caught in main()
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/awscli/clidriver.py", line 208, in main
return command_table[parsed_args.command](remaining, parsed_args)
File "/Library/Python/2.7/site-packages/awscli/clidriver.py", line 345, in __call__
return command_table[parsed_args.operation](remaining, parsed_globals)
File "/Library/Python/2.7/site-packages/awscli/clidriver.py", line 517, in __call__
call_parameters, parsed_globals)
File "/Library/Python/2.7/site-packages/awscli/clidriver.py", line 638, in invoke
self._display_response(operation_name, response, parsed_globals)
File "/Library/Python/2.7/site-packages/awscli/clidriver.py", line 657, in _display_response
formatter = get_formatter(output, parsed_globals)
File "/Library/Python/2.7/site-packages/awscli/formatter.py", line 272, in get_formatter
raise ValueError("Unknown output type: %s" % format_type)
ValueError: Unknown output type: [None]

$ aws configure
Press Enter to accept the existing values until you reach the "Default output format [None]:" prompt, then enter one of json, text, or table (all in lower case).
After that, rerun your command.
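For reference, the prompt sequence looks roughly like this (the masked key and the region shown are placeholders):
$ aws configure
AWS Access Key ID [****************ABCD]:
AWS Secret Access Key [****************wxyz]:
Default region name [us-east-1]:
Default output format [None]: json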

Run the command "aws configure", then check if the word "JSON" in "Default output format [JSON]:" is in upper case or lower case? If it's in upper case then running any aws command shows "Unknown ouput type : JSON".
Or alternately open the file C:\Users\<user>\.aws\config file and check the entry "output = json". If the word json is in upper case then running any aws command shows "Unknown ouput type : JSON".
Solution:
Replace the upper case JSON with lower case json.
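For reference, a valid config file (~/.aws/config on macOS/Linux, C:\Users\<user>\.aws\config on Windows) looks like this after the fix; the region value here is only an example:
[default]
region = us-east-1
output = json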

My ~/.aws/config was somehow in a bad state: there were multiple declarations for the same setting under a single role header. Editing the file manually fixed my issue.
The info under Configuration Settings and Precedence (https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) led me to the right place.
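For illustration only (the profile name and values below are hypothetical), the broken state looked roughly like a section with the same setting declared twice; deleting the duplicate line is the fix:
[profile myrole]
output = json
output = table
region = us-east-1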

Figure out your ARN and your role, and set up everything (region and cluster name) as in the command below, each in its respective place.
aws eks update-kubeconfig --name eks-cluster-name --region aws-region --role-arn arn:aws:iam::XXXXXXXXXXXX:role/testrole
Also, read the documentation linked below.
https://aws.amazon.com/premiumsupport/knowledge-center/eks-api-server-unauthorized-error/
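Once the kubeconfig has been updated, a quick smoke test (not from the linked article, just the usual check) is to list the cluster's services with kubectl; if the role mapping is wrong you will see an Unauthorized error again:
$ kubectl get svc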

Related

Why does a Python script fail when it is run through a batch file by the Windows Task Scheduler, using a virtual environment?

I have a Python script on my Windows machine which has to be run in a virtual environment in order to satisfy package dependencies.
I have created a batch file to use it with Task Scheduler which looks like the following:
call activate vir_env
python "C:\Users\xxx\Documents\Anaconda\envs\vir_env\Scripts\script.py"
call conda deactivate
pause
set /p id="Press enter when finished"
This batch file executes successfully when I run it manually by double-clicking it, but it fails partway through execution when I schedule it with Task Scheduler. (I save the batch file on my desktop and point the scheduler at it there.)
I also see the following line in cmd while running it manually:
DevTools listening on ws://127.0.0.1:61347/devtools/browser/d86ec8f2-7af2-4a2b-89f4-6c6f7025cc02
But I get the following notification if I schedule and run it with Task Scheduler:
DevTools listening on ws://127.0.0.1:55329/devtools/browser/e8cd5010-6b41-4d35-a465-78a75e87a547
That can be seen in the error output as posted below.
DevTools listening on ws://127.0.0.1:55329/devtools/browser/e8cd5010-6b41-4d35-a465-78a75e87a547
Traceback (most recent call last):
File "C:\Users\xxx\Documents\Anaconda\envs\vir_env\Scripts\script.py", line 432, in <module>
gv.save(deps, buffer, fmt='png')
File "C:\Users\xxx\Documents\Anaconda\envs\vir_env\lib\site-packages\holoviews\util\__init_.py", line 820, in save
return renderer_obj.save(obj, filename, fmt=fmt, resources=resources,
File "C:\Users\xxx\Documents\Anaconda\envs\vir_env\lib\site-packages\holoviews\plotting\renderer.py", line 627, in save
rendered = self_or_cls(plot, fmt)
File "C:\Users\xxx\Documents\Anaconda\envs\vir_env\lib\site-packages\holoviews\plotting\renderer.py", line 201 in __call__
data = self._figure_data(plot, fmt, **kwargs)
File "C:\Users\xxx\Documents\Anaconda\envs\vir_env\lib\site-packages\holoviews\plotting\bokeh\renderer.py", line 131, in _figure_data
img = get_screenshot_as_png(plot.state, driver=state.webdriver)
File "C:\Users\xxx\Documents\Anaconda\envs\vir_env\lib\site-packages\bokeh\io\export.py", line 223, in get_screenshot_as_png
web_driver.maximize_window()
File "C:\Users\xxx\Documents\Anaconda\envs\vir_env\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 737, in maximize_window
self.execute(command, params)
File "C:\Users\xxx\Documents\Anaconda\envs\vir_env\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\xxx\Documents\Anaconda\envs\vir_env\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchWindowException: Message: Browsing context has been discarded
How can this be possible?
The dev tool link seems to be different in the two cases.
Could this be the potential cause of the issue?
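One thing worth checking first (this is an assumption on my part, not something the traceback proves): Task Scheduler runs the job in a non-interactive session with no visible desktop, so the webdriver that bokeh starts to render the PNG can lose its window, which is what "Browsing context has been discarded" and the failing maximize_window() call suggest. A common workaround is to create an explicitly headless driver yourself and pass it to bokeh's export function. A minimal sketch, assuming geckodriver/Firefox (which that error message usually comes from) and a bokeh object named plot:
from selenium import webdriver
from selenium.webdriver.firefox.options import Options
from bokeh.io.export import export_png

# Build a headless Firefox driver so no visible window or desktop session is needed.
opts = Options()
opts.add_argument("--headless")
driver = webdriver.Firefox(options=opts)

# 'plot' is a placeholder for the bokeh figure/layout being exported.
export_png(plot, filename="deps.png", webdriver=driver)
driver.quit()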
I had the same problem but found a solution without using Task Scheduler.
Create a file and name it, e.g., Example.bat
Edit the file and paste this code:
set Path_To_File=C:\Path\To\File
set SunDay_Time=11:34
set MonDay_Time=13:20
set TuesDay_Time=07:30
set WednesDay_Time=02:57
set ThursDay_Time=11:42
set Friday_Time=23:59
set Saturday_Time=23:59
@echo off
:loop
timeout 10
for /f %%i in ('powershell ^(get-date^).DayOfWeek') do set day=%%i
SET "HH=%TIME:~0,2%
SET "MM=%TIME:~3,2%
if %day%==Sunday set Time_Run=%SunDay_Time%
if %day%==Monday set Time_Run=%MonDay_Time%
if %day%==Tuesday set Time_Run=%TuesDay_Time%
if %day%==Wednesday set Time_Run=%WednesDay_Time%
if %day%==Thursday set Time_Run=%ThursDay_Time%
if %day%==Friday set Time_Run=%Friday_Time%
if %day%==Saturday set Time_Run=%Saturday_Time%
if "%HH%:%MM%" EQU "%Time_Run%" start "%Path_To_File%"
goto loop
Edit this section
Change Path_To_File to the path of the program you want to run, set the times at which you want it to run in this section, and then save the file.
set Path_To_File=C:\Path\To\File
set SunDay_Time=11:34
set MonDay_Time=13:20
set TuesDay_Time=07:30
set WednesDay_Time=02:57
set ThursDay_Time=11:42
set Friday_Time=23:59
set Saturday_Time=23:59
Press Windows + R and type shell:startup
Create a file and name it example.vbs
Edit the file and paste this code:
Set WshShell = CreateObject("WScript.Shell")
WshShell.Run chr(34) & "C:\Path\To\File.bat" & Chr(34), 0
Set WshShell = Nothing
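(The second argument 0 in WshShell.Run makes the batch file run with a hidden console window, so nothing pops up when you log in.)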
Replace C:\Path\To\File.bat with the path of the batch file and save the file

Is there a specific format for storing environment variables in .bash_profile?

Following is the Python code I am using to send messages to Slack. It throws an error when I try to get the api_key from the environment, but it works perfectly fine when I replace webhook with the actual API key.
import requests
import json
import os
data = {
"text" : "hi there"
}
webhook = os.environ.get("SLACK_API_KEY")
requests.post(webhook, json.dumps(data))
The SLACK_API_KEY is an environment variable which I have stored in the .bash_profile file of my system. The API key has the following format:
https://hooks.slack.com/services/alpha_numeric/alpha_numeric/alpha_numeric
This is how I have stored the API key in my .bash_profile file:
export SLACK_API_KEY="https://hooks.slack.com/services/alpha_numeric/alpha_numeric/alpha_numeric"
This is the error I get when I try to read the api_key from the environment.
Traceback (most recent call last):
File "/Users/nikhilsawal/OneDrive/investment_portfolio/helper_functions.py", line 10, in <module>
requests.post(webhook, json.dumps(data))
File "/Users/nikhilsawal/OneDrive/investment_portfolio/track_proj/lib/python3.8/site-packages/requests/api.py", line 119, in post
return request('post', url, data=data, json=json, **kwargs)
File "/Users/nikhilsawal/OneDrive/investment_portfolio/track_proj/lib/python3.8/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/nikhilsawal/OneDrive/investment_portfolio/track_proj/lib/python3.8/site-packages/requests/sessions.py", line 516, in request
prep = self.prepare_request(req)
File "/Users/nikhilsawal/OneDrive/investment_portfolio/track_proj/lib/python3.8/site-packages/requests/sessions.py", line 449, in prepare_request
p.prepare(
File "/Users/nikhilsawal/OneDrive/investment_portfolio/track_proj/lib/python3.8/site-packages/requests/models.py", line 314, in prepare
self.prepare_url(url, params)
File "/Users/nikhilsawal/OneDrive/investment_portfolio/track_proj/lib/python3.8/site-packages/requests/models.py", line 388, in prepare_url
raise MissingSchema(error)
requests.exceptions.MissingSchema: Invalid URL 'None': No schema supplied. Perhaps you meant http://None?
[Finished in 0.149s]
Is your env var stored correctly? To check your environment variable you can do echo $SLACK_API_KEY in your shell, or run env and grep for it.
Pay attention that ~/.bash_profile is for user-specific settings and actions (login shells only); alternatively there is ~/.bashrc, which is executed for interactive non-login shells.
I suggest putting your variable in ~/.profile, and then don't forget to source ~/.profile (or wherever you decide to place your env variable).
If these files do not exist you can create them; I recommend this post, which explains the differences.
You can also set the variable directly from the script:
os.environ["SLACK_API_KEY"] = "value"
I did a small test with ~/.bash_profile for your case.
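As a small defensive addition (a sketch along the lines of your original code, not something you must use), you can fail fast with a clear message when the variable is not visible to the process, instead of letting requests raise MissingSchema for a None URL:
import json
import os

import requests

data = {
    "text": "hi there"
}

webhook = os.environ.get("SLACK_API_KEY")
if not webhook:
    # The variable is not set in this process's environment,
    # e.g. the profile file that exports it was never sourced.
    raise RuntimeError("SLACK_API_KEY is not set for this process")

requests.post(webhook, json.dumps(data))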

python script in cron not reading a CSV unless it creates the CSV itself

I have the following script. It works when I run it from the command line, and it works when I run it through cron.
The variable 'apath' is the absolute path of the file.
import pandas as pd
cat=['a','a','a','a','a','b','b','b','b','b']
val=[1,2,3,4,5,6,7,8,9,10]
columns=['cat','val']
data=[cat,val]
dict={key:value for key,value in zip(columns,data)}
statedata_raw=pd.DataFrame(data=dict)
statedata_raw.to_csv(apath+'state_data.csv',index=False)
statedata_raw2=pd.read_csv(apath+'state_data.csv')
statedata_raw2.to_csv(apath+'state_data2.csv',index=False)
But when I try to run the first part manually, creating the first csv, and then run the second part through cron, the second read_csv statement fails. I checked the permissions on the state_data.csv file and they are fine. It's set to -rwxr-xr-x
To be specific: I first run this script manually from the command line. It executes and creates state_data.csv. Then I check the permissions of state_data.csv, and they are -rwxr-xr-x
cat=['a','a','a','a','a','b','b','b','b','b']
val=[1,2,3,4,5,6,7,8,9,10]
columns=['cat','val']
data=[cat,val]
dict={key:value for key,value in zip(columns,data)}
statedata_raw=pd.DataFrame(data=dict)
statedata_raw.to_csv(apath+'state_data.csv',index=False)
and then this script via cron, which fails, and gives the error message below
statedata_raw2=pd.read_csv(apath+'state_data.csv')
statedata_raw2.to_csv(apath+'state_data2.csv',index=False)
This is the error that I get from the system
Traceback (most recent call last):
File "/users/michaelmader/wdtest.py", line 39, in <module>
statedata_raw2=pd.read_csv(apath+'state_data.csv')
File "/opt/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 676, in parser_f
return _read(filepath_or_buffer, kwds)
File "/opt/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 448, in _read
parser = TextFileReader(fp_or_buf, **kwds)
File "/opt/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 880, in __init__
self._make_engine(self.engine)
File "/opt/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 1114, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "/opt/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 1891, in __init__
self._reader = parsers.TextReader(src, **kwds)
File "pandas/_libs/parsers.pyx", line 374, in pandas._libs.parsers.TextReader.__cinit__
File "pandas/_libs/parsers.pyx", line 678, in pandas._libs.parsers.TextReader._setup_parser_source
OSError: Initializing from file failed
To summarize
Run complete script through Terminal: state_data2.csv is created: pass
Run complete script through cron: state_data2.csv is created: pass
Run first part through Terminal, second part through cron: fail
I am on macOS and I already gave crontab Full Disk Access in System Preferences.
I figured out the problem. The issue was the permissions granted to cron on macOS. I thought I had solved it by giving /usr/bin/crontab Full Disk Access, but I actually needed to give Full Disk Access to /usr/sbin/cron.
The steps for doing this can be found here: https://blog.bejarano.io/fixing-cron-jobs-in-mojave/.
Once I made that change everything worked fine.
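For anyone still chasing a similar cron-only failure, it can help to log the exception together with the user and working directory the job actually ran with (a sketch; the log filename and the apath value here are placeholders):
import getpass
import os
import traceback

import pandas as pd

apath = '/Users/michaelmader/'  # placeholder for the absolute path used in the real script

try:
    statedata_raw2 = pd.read_csv(apath + 'state_data.csv')
except Exception:
    # Record who ran the job and where, plus the full traceback, then re-raise.
    with open(apath + 'cron_debug.log', 'a') as log:
        log.write('user=%s cwd=%s\n' % (getpass.getuser(), os.getcwd()))
        log.write(traceback.format_exc() + '\n')
    raise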

Unable to run Porter5: generating `.flatpsi` file instead of `.psi`

I am trying to use Porter5 to run protein secondary structure prediction on a FASTA file containing a bunch of protein sequences. I am using a Linux machine.
For starters, I decided to try using the example file that gets downloaded along with Porter5, called 2FLGA.fasta. The command I used was the one I found on the GitHub page for Porter5 (https://github.com/mircare/Porter5/)
$ python3 Porter5.py -i example/2FLGA.fasta --cpu 4
I got the following error message:
sh: 1: /home/user/ncbi-blast-2.8.1+/bin/psiblast: not found
PSI-BLAST executed in 0.01s
wc: example/2FLGA.fasta.psi: No such file or directory
awk: cannot open example/2FLGA.fasta.psi (No such file or directory)
HHblits executed in 0.01s
Traceback (most recent call last):
File "/home/user/Porter5/scripts/process-alignment.py", line 37, in <module>
sequences = lines[0] = len(lines) - 1
IndexError: list assignment index out of range
Traceback (most recent call last):
File "Porter5.py", line 80, in <module>
flatpsi_ann = open(filename+".flatpsi.ann", "r").readlines()
FileNotFoundError: [Errno 2] No such file or directory: 'example/2FLGA.fasta.flatpsi.ann'
After PSI-BLAST, the Porter5 script is expecting an output file called 2FLGA.fasta.psi. I checked the example directory and it contains an output file called 2FLGA.fasta.flatpsi.
I'm not sure what to do here. I don't want to try modifying any of the Porter5 scripts to look for .flatpsi files instead of .psi files because I am a beginner at programming, and I don't want all hell to break loose by tampering with the code.
Could someone please help me with this? Any help is appreciated.
(There are a bunch of errors to negotiate with later but I'll see about those after dealing with the first one.)
I am the author of Porter5, and I generally recommend opening an issue directly on GitHub, since I don't get any notification otherwise.
It looks like the path of psiblast is wrong (first line of your error message). You can check that with the following command:
$ ls /home/user/ncbi-blast-2.8.1+/bin/psiblast
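If that ls fails, the binary is somewhere else (or not installed); you can locate it and then point the corresponding entry in scripts/config.ini at the real path, for example:
$ which psiblast
$ find /home -name psiblast -type f 2>/dev/null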
Also, the path for the executable or the database of HHblits is wrong, or maybe both. You can check that as follows (within Porter5/):
$ cat scripts/config.ini
You can either edit scripts/config.ini or run the following command until Porter5 runs successfully:
$ python3 Porter5.py -i example/2FLGA.fasta --cpu 4 --setup
(The .flatpsi file is an intermediate file; it doesn't contain a valid representation if HHblits doesn't run successfully.)

Why am I getting a runtime error / key error "no device found for" empty device address?

Why am I getting the following error message when executing the uhd_fft GNU Radio script:
/opt/gnuradio-3.7.1git/bin$ uhd_fft
linux; GNU C++ version 4.6.3; Boost_104601; UHD_003.005.003-123-g1c391767
Traceback (most recent call last):
File "/opt/gnuradio-3.7.1git/bin/uhd_fft", line 341, in <module>
main ()
File "/opt/gnuradio-3.7.1git/bin/uhd_fft", line 337, in main
app = stdgui2.stdapp(app_top_block, "UHD FFT", nstatus=1)
File "/opt/gnuradio-3.7.1git/lib/python2.7/dist-packages/gnuradio/wxgui/stdgui2.py", line 38, in __init__
wx.App.__init__ (self, redirect=False)
File "/usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode/wx/_core.py", line 7981, in __init__
self._BootstrapApp()
File "/usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode/wx/_core.py", line 7555, in _BootstrapApp
return _core_.PyApp__BootstrapApp(*args, **kwargs)
File "/opt/gnuradio-3.7.1git/lib/python2.7/dist-packages/gnuradio/wxgui/stdgui2.py", line 42, in OnInit
self._max_noutput_items)
File "/opt/gnuradio-3.7.1git/lib/python2.7/dist-packages/gnuradio/wxgui/stdgui2.py", line 64, in __init__
self.panel = stdpanel (self, self, top_block_maker, max_nouts)
File "/opt/gnuradio-3.7.1git/lib/python2.7/dist-packages/gnuradio/wxgui/stdgui2.py", line 86, in __init__
self.top_block = top_block_maker (frame, self, vbox, sys.argv)
File "/opt/gnuradio-3.7.1git/bin/uhd_fft", line 91, in __init__
otw_format=options.wire_format, args=options.stream_args))
File "/opt/gnuradio-3.7.1git/lib/python2.7/dist-packages/gnuradio/uhd/__init__.py", line 121, in constructor_interceptor
return old_constructor(*args)
File "/opt/gnuradio-3.7.1git/lib/python2.7/dist-packages/gnuradio/uhd/uhd_swig.py", line 1700, in make
return _uhd_swig.usrp_source_make(*args)
RuntimeError: LookupError: KeyError: No devices found for ----->
Empty Device Address
I'm using BladeRF hardware and followed these instructions.
I have gone through the recommendations listed here but UHD_FFT still can't seem to find the BladeRF even though
ls -lrt /dev | grep blade
crw------- 1 root root 180, 0 Aug 11 14:04 bladerf0
Why would my device not be found by UHD_FFT even though Linux is aware of its existence?
It looks like your BladeRF is only accessible by the root user. To fix this, make a udev rule file (I know the write-up you followed earlier had you do something similar, but bear with me). This will allow your regular user account to access it. You can start in the shell by typing:
$ sudo nano /etc/udev/rules.d/15-bladerf
This should make a new file and open the nano editor. Here you will place the following:
SUBSYSTEM=="usb", SYSFS{idVendor}=="1d50", SYSFS{idProduct}=="6066", MODE="0666"
Afterwards, reset the udev rules service by executing:
$ sudo /etc/init.d/udev restart
NOTE: These commands should work on any Debian-based OS (Debian, Ubuntu, Linux Mint, ...)
CREDIT: The udev rules were found here http://pastebin.com/Mgb90L1x
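After restarting udev, unplug and replug the BladeRF (or reboot) so the new rule is applied, then re-run the check from the question; the device node should no longer be restricted to root:
$ ls -lrt /dev | grep blade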
