Loop through API call in bash script - Linux

I have an API that returns 50 users.
Is there a way of looping through the API call?
expand=users%5B1%3A50%5D
URL-decoded, that is expand=users[1:50]: the 1 after %5B is the starting number, and it pulls through 50, the number after %3A.
I have a script that stores the responses to a text file, but how can I loop through this in increments of 50?
For example, having variables in place of the numbers:
expand=users%5B$num1%3A$num2%5D

expand=( users%5B{1..50}%3A50%5D )
Brace expansion does the counting for you: {1..5} expands to 1 2 3 4 5, for example
$ echo abc{1..5}def
abc1def abc2def abc3def abc4def abc5def
With bash 4 and later you can also give a step, e.g. {1..451..50} expands to 1 51 101 ... 451. Note the array assignment above: a plain expand=users%5B{1..50}%3A50%5D would keep the braces literal, since brace expansion does not happen on the right-hand side of a scalar assignment. Now all you need is to loop over the expanded values:
for api in "${expand[@]}"
do
    #do something
done
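Putting the pieces together for the increments-of-50 requirement: a minimal sketch, assuming a hypothetical endpoint and that each response is appended to a text file. The function name `build_ranges` and the base URL are placeholders, not anything from the original question.

```shell
#!/usr/bin/env bash
# Emit the URL-encoded range parameter for each page of 50 users.
# %5B = '[', %3A = ':', %5D = ']' -- so users%5B1%3A50%5D is users[1:50].
build_ranges() {
    local last=$1 start end
    for start in $(seq 1 50 "$last"); do
        end=$((start + 49))
        printf 'expand=users%%5B%s%%3A%s%%5D\n' "$start" "$end"
    done
}

# Fetch each page and append to a file (the base URL is a placeholder):
for params in $(build_ranges 101); do
    echo curl -s "https://example.com/api?${params}"   # drop `echo` to really fetch
done
```

`build_ranges 101` emits the parameters for users 1:50, 51:100, and 101:150; adjust the upper bound to however many users the API holds.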

Related

Is there a bash function for determining number of variables from a read provided from end user

I am currently working on a small command-line tool that someone not very familiar with bash could run from their computer. I have changed the content for confidentiality; however, the functionality remains the same.
The user would be given a prompt; they would then respond with their answer(s).
From this, I need two pieces of information:
1. their responses, now as individual variables
2. the number of variables I have been given, with that count as a variable too
my current script is as follows
echo List your favorite car manufacturers
read $car1 $car2 $car3 #end user can list as many as they wish
for n in {1..$numberofmanufacturers} #finding the number of variables/manufacturers is my second question
do
    echo car$n
done
I want to allow the user to enter as many car manufacturers as they please (n >= 1), but I need each manufacturer to be a different variable. I also need to automate the count of the number of manufacturers and have that value be its own variable.
Would I be better off having the end user create a .txt file in which they list their manufacturers (vertically), so that I can use wc -l to determine the number of manufacturers?
I appreciate any help in advance.
As I said in the comment, whenever you want to use multiple dynamically created variables, you should check if there isn't a better data structure for your use case; and in almost all cases there will be. Here is the implementation using bash arrays. It prints out the contents of the input array in three different ways.
echo List your favorite car manufacturers
# read in an array, split on spaces
read -a cars
echo Looping over array values
for car in "${cars[@]}"
do
    echo "$car"
done
echo Looping over array indices
for i in "${!cars[@]}"
do
    echo "${cars[$i]}"
done
echo Looping from 0 to length-1
numcars=${#cars[@]}
for i in $(seq 0 $((numcars-1)))
do
    echo "${cars[$i]}"
done
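For the .txt-file alternative raised in the question, you don't even need wc -l: mapfile (a bash 4+ builtin) reads each line into an array element, and ${#array[@]} gives the count directly. A sketch, assuming one manufacturer per line; the file name and sample contents are made up for illustration:

```shell
#!/usr/bin/env bash
# Sample input: one manufacturer per line (stands in for the user's .txt file).
printf 'Toyota\nHonda\nFord\n' > manufacturers.txt

# mapfile (bash >= 4) reads each line into an array element.
mapfile -t cars < manufacturers.txt

numcars=${#cars[@]}          # the count, as its own variable
echo "You listed $numcars manufacturers:"
printf '%s\n' "${cars[@]}"
```

Compared with wc -l, this gives you both the count and the individual values in one step.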

Bash Script Efficient For Loop (Different Source File)

First of all, I'm a beginner in bash scripting. I usually code in Java, but this particular task requires me to create some bash scripts on Linux. FYI, I've already made a working script, but I think it's not efficient enough given the large files I'm dealing with.
The problem is simple: I have 2 logs that I need to compare, making some corrections in one of them. I'll call them log A and log B. The two logs have different formats; here is an example:
01/04/2015 06:48:59|9691427842139609|601113090635|PR|30.00|8.3|1.7| <- log A
17978712 2015-04-01 06:48:44 601113090635 SUCCESS DealerERecharge.<-log B
17978714 2015-04-01 06:48:49 601113090635 SUCCESS DealerERecharge.<-log B
As you can see, there is a gap in the timestamps. The log B line that actually matches log A is the one with ID 17978714, because its time is the closest. The largest gap I've seen is 1 minute. I can't use RANGE logic, because if more than one line in log B falls within the 1-minute range, all of those lines will show up in my regenerated log.
The script I made contains a for loop which iterates over the timestamps of log A until it hits something in log B (the first hit is the closest).
Inside the for loop I have this line of code, which makes the loop slow:
LINEOUTPUT=$(less $File2 | grep "Validation 1" | grep "Validation 2" | grep "Timestamp From Log A")
I've read some examples using sed, but the problem is that I have 2 more validations to check before matching on the timestamp.
The validations work as a filter to narrow down the exact match between logs A and B.
Additional info: I benchmarked the script by timing the loop. One thing I've noticed is that even though I use only one pipeline in the loop body, each iteration is still slow.
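No answer is recorded here, but one easy win with the line above is to drop `less` (which only forwards the file) and collapse the three chained greps into a single pass, so the file is read once per lookup and the scan stops at the first match. A sketch, with the validation strings passed in as parameters; the function name and demo data are placeholders:

```shell
#!/usr/bin/env bash
# Print the first line of $file containing all three strings, scanning once.
match_line() {
    local file=$1 v1=$2 v2=$3 ts=$4
    awk -v a="$v1" -v b="$v2" -v t="$ts" \
        'index($0,a) && index($0,b) && index($0,t) { print; exit }' "$file"
}

# Demo with a stand-in for log B:
printf '17978712 2015-04-01 06:48:44 601113090635 SUCCESS DealerERecharge.\n' >  logB.txt
printf '17978714 2015-04-01 06:48:49 601113090635 SUCCESS DealerERecharge.\n' >> logB.txt
match_line logB.txt SUCCESS 601113090635 "06:48:49"
```

The `exit` after the first match mirrors the "first one it hits is the closest" logic while avoiding reading the rest of the file.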

Simulating rolling dice on a dynamic webpage using Bash

I'm currently working on a Bash script that simulates rolling a number of 6-sided dice. This all takes place within a virtual machine running Debian that acts as a server. Essentially, my webpage simulates rolling the dice by using the query string to determine the number of dice to be rolled.
For instance, if my URL is http://127.0.0.1/cgi-bin/rolldice.sh?6, I want the webpage to say "You rolled 6 dice" and then, on the next line, print six numbers between 1 and 6 inclusive (that are of course "randomly" generated).
Currently, printing out the "You rolled x dice" header works fine. However, I'm having trouble with the next part. I'm very new to Bash, so possibly the syntax or something similar is wrong in my loop. Here it is:
for i in {1..$QUERY_STRING }; do
dieRoll = $(( $RANDOM % 6 + 1))
echo $dieRoll
done
Can anyone help me figure out where I'm going wrong? I'll be happy to post the rest of rolldice.sh if needed.
Since brace expansion happens before parameter expansion, {1..$QUERY_STRING} never sees the variable's value; you have to use eval to substitute it:
for i in $(eval "echo {1..$QUERY_STRING}"); do
Or if you have the seq command, you can do:
for i in $(seq 1 "$QUERY_STRING")
I recommend the latter -- using eval with input from the user is very dangerous.
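A third option worth noting: bash's C-style for loop handles variables directly, needing neither eval nor seq. A sketch (the QUERY_STRING value is hard-coded here to stand in for what the web server would provide; note also that bash assignments must have no spaces around the `=`, unlike the `dieRoll = ...` line in the question):

```shell
#!/usr/bin/env bash
QUERY_STRING=6   # stands in for the value the CGI environment would supply

echo "You rolled $QUERY_STRING dice"
for (( i = 1; i <= QUERY_STRING; i++ )); do
    dieRoll=$(( RANDOM % 6 + 1 ))   # no spaces around '=' in assignments
    echo "$dieRoll"
done
```

Like seq, this avoids handing user input to eval; a stray `QUERY_STRING` like `6; rm -rf /` stays inert inside the arithmetic context.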

Filename manipulation in cygwin

I am running Cygwin on Windows 7. I am using a signal-processing tool, basically performing alignments. I had about 1200 input files, each of the format given below.
input_file_format = "AC_XXXXXX.abc"
The first step required building indexes for all the input files; this was done with the tool's build-index command, and now each file has 6 indexes associated with it. Therefore I now have about 1200*6 = 7200 index files. The indexes have the form given below.
indexes_format = "AC_XXXXXX.abc.1",
"AC_XXXXXX.abc.2",
"AC_XXXXXX.abc.3",
"AC_XXXXXX.abc.4",
"AC_XXXXXX.abc.rev.1",
"AC_XXXXXX.abc.rev.2"
Now I need to use these indexes to perform the alignment. All 6 indexes of each file are used together, and the final operation is run as follows:
signal-processing-tool ..\path-to-indexes\AC_XXXXXX.abc ..\Query file
where AC_XXXXXX.abc is the basename shared by that file's indexes; all 6 index files are matched by AC_XXXXXX.abc*.
My problem is that I need to use only the first 14 characters of the index file names for the final operation.
When I use the code below, the alignment is not executed.
for file in indexes/*; do ./tool $file|cut -b1-14 Project/query_file; done
I'd appreciate help with this!
First of all, keep in mind that $file will always start with "indexes/", so taking the first 14 characters would always include that folder name.
To take the first 14 characters of a variable, use ${file:0:14}, where 0 is the starting index and 14 is the length of the desired substring.
Alternatively, if you want to use cut, you need to run it in a command substitution:
for file in indexes/*; do ./tool $(echo $file | cut -c 1-14) Project/query_file; done
(I changed the arg for cut to -c, for characters instead of bytes.)
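Combining both points: strip the directory prefix first, then take the first 14 characters of the basename. A sketch; the helper name `first14` and the sample filename are made up for illustration:

```shell
#!/usr/bin/env bash
first14() {
    local file=$1
    local name=${file##*/}     # drop everything up to the last '/', e.g. "indexes/"
    echo "${name:0:14}"        # first 14 characters of the bare filename
}

# e.g. indexes/AC_0123456.abc.rev.1 -> AC_0123456.abc
first14 "indexes/AC_0123456.abc.rev.1"
```

In the loop, that becomes something like ./tool "indexes/$(first14 "$file")" Project/query_file, keeping the path prefix that the tool presumably still needs.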

writing data from a program to a file

I'm using Linux. Let's say I have a program named add that takes two numbers.
So if I type in
add 1 2
the answer is 3 //obvious
What command will write this out to a file named add.data?
I'm kind of a Linux n00b. I was reading about piping. Thanks.
Piping means sending the output of one program to the standard input of a second program, which must actually read from standard input, e.g.
add 1 2 | cat
(echo would not work on the right-hand side here, since echo ignores its standard input.)
What you are asking about here is output redirection: you should use
add 1 2 > add.data
to create a new file with your output (an existing file will be overwritten), and
add 1 2 >> add.data
to create the file if missing, or append to it if it exists.
add 2 3 > something.txt
This will redirect output into a file, recreating the file every time:
add 1 2 > add.data
This will append to the end of the file:
add 1 2 >> add.data
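If you want to both see the output on the terminal and save it to the file, tee sits in the middle of a pipe (with -a to append rather than overwrite). A quick sketch, where `echo 3` stands in for the hypothetical `add 1 2`:

```shell
#!/usr/bin/env bash
# `echo 3` stands in for `add 1 2`, which only exists in the question.
echo 3 | tee add.data      # prints 3 to the terminal AND overwrites add.data
echo 5 | tee -a add.data   # -a appends instead of overwriting
```

Afterwards add.data contains both results, one per line, and both were also echoed to the screen.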
