The goal: produce a path from an integer.
I need to split strings into fixed-length pieces (2 characters in this case), and then glue the pieces back together with a separator. Example: 123456 => 12/34/56, 12345 => 12/34/5.
I found a solution with sed:
sed 's/\(..\)/\1\//g'
but I'm not sure it's really quick, since I don't need any analysis of the string content (which will always be an integer, if that's of any importance); I just want to split it into pieces of length 2 (or 1 if the original length is odd).
Bash parameter expansion can extract substrings:
var=123456
echo "${var:0:2}" # 2 first char
echo "${var:2:2}" # next two
echo "${var:4:2}" # etc.
joining them manually with /:
echo "${var:0:2}/${var:2:2}/${var:4:2}"
Use parameter substitution. ${var:position:length} extracts substrings, ${#var} returns the length of the value, ${var%final} removes "final" from the end of the value. Run it in a loop for strings of unknown length:
#!/bin/bash
for s in 123456 1234567 ; do
    o=""
    for (( pos=0 ; pos<${#s} ; pos+=2 )) ; do
        o+=${s:pos:2}/
    done
    o=${o%/}
    echo "$o"
done
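For the two sample strings this prints:
12/34/56
12/34/56/7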
TL;DR
sed is fast enough.
If we are talking about speed, let's check.
I think sed is the shortest solution, but as a comparison I'll take @choroba's shell script:
$ wc -l hugefile
10877493 hugefile
Sed:
sed 's/\(..\)/\1\//g' hugefile
Output:
real 0m25.432s
user 0m8.731s
sys 0m10.123s
Script:
#!/bin/bash
while IFS='' read -r s ; do
    o=""
    for (( pos=0 ; pos<${#s} ; pos+=2 )) ; do
        o+=${s:pos:2}/
    done
    o=${o%/}
    echo "$o"
done < hugefile
It kept working for a really long time; I interrupted it at:
real 1m19.480s
user 1m14.795s
sys 0m4.683s
So on my PC (Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz, MemTotal: 16324532 kB), sed makes around 426568 (close to half a million) string modifications per second. That seems fast enough.
You can split a string into elements using the fold command, read the elements into an array with readarray and process substitution, and then insert the field separator using IFS:
$ var=123456
$ readarray -t arr < <(fold -w2 <<< "$var")
$ (IFS=/; echo "${arr[*]}")
12/34/56
I put the last command in a subshell so the change to IFS is not persistent.
Notice that the [*] syntax is required here, or IFS won't be used as the output separator, i.e., the usually preferred [@] wouldn't work.
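To illustrate: with [@] each element expands as a separate word and echo simply joins its arguments with spaces, so the IFS setting has no effect on the output:
$ (IFS=/; echo "${arr[*]}")
12/34/56
$ (IFS=/; echo "${arr[@]}")
12 34 56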
readarray and its synonym mapfile require Bash 4.0 or newer.
This works with an odd number of elements as well:
$ var=12345
$ readarray -t arr < <(fold -w2 <<< "$var")
$ (IFS=/; echo "${arr[*]}")
12/34/5
Related
I have a string variable in my script, made up of the 9 permission characters from ls -l
eg:
rwxr-xr--
I want to manipulate it so that it displays like this:
r w x r - x r - -
i.e. every three characters are tab-separated and all others are separated by a space. The closest I've come is using printf:
printf "%c %c %c\t%c %c %c\t%c %c %c\t/\n" "$output"{1..9}
This only prints the first character, but formatted correctly.
I'm sure there's a way to do it using sed that I can't think of.
Any advice?
Using the Posix-specified utilities fold and paste, split the string into individual characters, and then interleave a series of delimiters:
fold -w1 <<<"$str" | paste -sd'  \t'
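For example, with the permission string from the question (the -d list is space, space, tab, so the delimiters repeat in groups of three):
$ str=rwxr-xr--
$ fold -w1 <<<"$str" | paste -sd'  \t'
r w x	r - x	r - -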
$ sed -r 's/(.)(.)(.)/\1 \2 \3\t/g' <<< "$output"
r w x r - x r - -
Sadly, this leaves a trailing tab in the output. If you don't want that, use:
$ sed -r 's/(.)(.)(.)/\1 \2 \3\t/g; s/\t$//' <<< "$str"
r w x r - x r - -
Why do you need to parse them? You can access every element of the string by copying the needed character. It's very easy and needs no external utility, for example:
DATA="rwxr-xr--"
while [ $i -lt ${#DATA} ]; do
echo ${DATA:$i:1}
i=$(( i+1 ))
done
With awk:
$ echo "rwxr-xr--" | awk '{gsub(/./,"& ");gsub(/. . . /,"&\t")}1'
r w x r - x r - -
> echo "rwxr-xr--" | sed 's/\(.\{3,3\}\)/\1\t/g;s/\([^\t]\)/\1 /g;s/\s*$//g'
r w x r - x r - -
( Evidently I didn't put much thought into my sed command. John Kugelman's version is obviously much clearer and more concise. )
Edit: I wholeheartedly agree with triplee's comment though. Don't waste your time trying to parse ls output. I did that for a long time before I figured out you can get exactly what you want (and only what you want) much easier by using stat. For example:
> stat -c %a foo.bar # Equivalent to stat --format %a
0754
The -c %a tells stat to output the access rights of the specified file, in octal. And that's all it prints out, thus eliminating the need to do wacky stuff like ls foo.bar | awk '{print $1}', etc.
So for instance you could do stuff like:
GROUP_READ_PERMS=040
perms=$(stat -c %a foo.bar)
if (( (perms & GROUP_READ_PERMS) != 0 )); then
... # Do some stuff
fi
Sure as heck beats parsing strings like "rwxr-xr--"
sed 's/.../& /2g;s/./& /g' YourFile
in 2 simple steps (the number combined with the g flag, as in 2g, is a GNU sed extension meaning "replace from the 2nd match onward")
A version which uses pure bash for short strings and sed for longer strings, and which preserves newlines (adding a space after them too):
if [ "${OS-}" = "Windows_NT" ]; then
threshold=1000
else
threshold=100
fi
function escape()
{
local out=''
local -i i=0
local str="${1}"
if [ "${#str}" -gt "${threshold}" ]; then
# Faster after sed is started
sed '# Read all lines into one buffer
:combine
$bdone
N
bcombine
:done
s/./& /g' <<< "${str}"
else
# Slower, but no process to load, so faster for short strings. On windows
# this can be a big deal
while (( i < ${#str} )); do
out+="${str:$i:1} "
i+=1
done
echo "$out"
fi
}
Explanation of the sed script: "If this is the last line, jump to :done, else append the Next line into the buffer and jump back to :combine." After :done there is a simple sed replacement expression. The entire string (newlines and all) is in one buffer, so the replacement works on newlines too (which are lost in some of the awk -F examples).
Plus this is Linux, Mac, and Git for Windows compatible.
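A quick check of the short-string path (the output ends with a trailing space, since the loop appends one after every character):
$ escape "Hello"
H e l l o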
Setting awk -F '' makes every character its own field; then you can loop through and print each field.
Example:
ls -l | sed -n 2p | awk -F '' '{for(i=1;i<=NF;i++){printf " %s ",$i;}}'; echo ""
This part seems like the answer to your question:
awk -F '' '{for(i=1;i<=NF;i++){printf " %s ",$i;}}'
I realize this doesn't provide the grouping in threes that you wanted, though; see the variant sketched below.
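One way to add that grouping (a sketch; it assumes an awk that accepts an empty FS, such as gawk or mawk) is to choose the separator per field inside the loop, printing a space after most characters, a tab after every third one, and a newline after the last:
echo "rwxr-xr--" | awk -F '' '{for(i=1;i<=NF;i++) printf "%s%s", $i, (i==NF ? "\n" : (i%3 ? " " : "\t"))}'
r w x	r - x	r - -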
I know how to replace a certain substring of a given string:
foo=abcABC
echo ${foo/abc/xyz} # xyzABC
Is it also possible to replace the first k characters by k times a given character?
Update: Example:
foobar, replace first k = 3 characters by Z yields ZZZbar.
Based on Change string char at index X. Given the string $foo, to change the first k characters to the string $pattern, this will do it:
for ((i=0; i < $k; i++))
do
foo="${foo:0:$i}$pattern${foo:$((i+1))}"
done
Test
$ a="hellomynameisyou"
$ k=5
$ pattern="x"
$ for ((i=0; i < $k; i++)); do a="${a:0:$i}$pattern${a:$((i+1))}"; echo $a; done
xellomynameisyou
xxllomynameisyou
xxxlomynameisyou
xxxxomynameisyou
xxxxxmynameisyou
For your specific example
$ pattern="Z"
$ k=3
$ a="foobar"
$ for ((i=0; i < $k; i++)); do a="${a:0:$i}$pattern${a:$((i+1))}"; echo $a; done
Zoobar
ZZobar
ZZZbar
$ echo $a
ZZZbar
You can also try:
matStr=abc
repChar=y
echo "${foo/$matStr/$(seq -s $repChar $((${#matStr}+1)) | tr -d '[0-9]')}"
This is not applicable when repChar is a digit, since the tr -d '[0-9]' step would strip the replacement character as well.
This would be fairly simple in Perl. I was looking for something similar in pure Unix utilities and Bash, but couldn't think of anything. The closest I found is tr.
This was written on Linux, so I use sed -r. If this is on a Mac, it should be sed -E. In fact, you might even get away without using either the -E or -r flag if you use backslashes before the parentheses.
What I do is produce two strings with sed. The first finds the first length characters and tosses out the rest of the string. The second sed tosses out the first length characters and keeps the string. I can then use tr to replace all the characters with my replacement character, then concatenate the two strings together.
string="1234567890"
length="4"
replace="z"
prefix=$(sed -r -e "s/^(.{1,$length}).*/\1/" <<<"$string" | tr "[:alnum:]" "$replace")
postfix=$(sed -r -e "s/^.{1,$length}//" <<<"$string")
string="${prefix}${postfix}"
echo "$string" #Will echo "zzzz567890"
This is very easy!
Variables:
str='helloworld'
k=3
char='.'
And the most important part:
Using Perl:
echo "$(perl -E "say '$char' x $k")${str:$k}"
Using Python:
echo "$(python -c "print '$char' * $k")${str:$k}"
Using printf and tr:
echo "$(printf "%${k}s" | tr ' ' "$char")${str:$k}"
Pure Bash:
for ((i = 0; i < $k; i++)); do echo -n "$char"; done
echo "${str:$k}"
Choose your weapon! I'd choose the pure Bash solution.
I am using a bash script and I am trying to split a string with urls inside for example:
str=firsturl.com/123416 secondurl.com/634214
So these URLs are separated by spaces. I already used IFS to split the string and it is working great; I can iterate through the two URLs with:
for url in $str; do
#some stuff
done
But my problem is that I need to know how many items the split produces, so for the str example it should return 2, but using this:
${#str[@]}
returns the length of the string (40 for the current example), i.e. the number of characters, when I need to get 2.
Also iterating with a counter won't work, because I need the number of elements before iterating the array.
Any suggestions?
Split the string up into an array and use that instead:
str="firsturl.com/123416 secondurl.com/634214"
array=( $str )
echo "Number of elements: ${#array[#]}"
for item in "${array[#]}"
do
echo "$item"
done
You should never have a space separated list of strings though. If you're getting them line by line from some other command, you can use a while read loop:
while IFS='' read -r url
do
array+=( "$url" )
done
For properly encoded URLs, this probably won't make much of a difference, but in general, this will prevent glob expansion and some whitespace issues, and it's the canonical format that other commands (like wget -i) work with.
You should use something like this
declare -a a=( $str )
n=${#a[*]} # number of elements
Several ways:
$ str="firsturl.com/123416 secondurl.com/634214"
bash array:
$ while read -a ary; do echo ${#ary[@]}; done <<< "$str"
2
awk:
$ awk '{print NF}' <<< "$str"
2
*nix utility:
$ printf "%s\n" $(printf "$str" | wc -w)
2
bash without array:
$ set -- $str
$ echo ${#@}
2
If you create a function that echoes $# (the number of its arguments), passing the split string to it will give you the count.
count_params () { echo $#; }
Then passing $str (unquoted, so it undergoes word splitting) to this function gives you the result:
str="firsturl.com/123416 secondurl.com/634214"
count_params $str
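which prints:
2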
I have a file with a list of addresses; it looks like this (ADDRESS_FILE):
0xf012134
0xf932193
.
.
0fx12923a
I have another file with a list of numbers; it looks like this (NUMBERS_FILE):
20
40
.
.
12
I want to cut the first 20 lines from ADDRESS_FILE and put them into a new file,
then cut the next 40 lines from ADDRESS_FILE, and so on ...
I know that a series of sed commands like the one given below does the job
sed -n 1,20p ADDRESSS_FILE > temp_file_1
sed -n 20,60p ADDRESSS_FILE > temp_file_2
.
.
sed -n somenumber,endofilep. ADDRESS_FILE > temp_file_n
But I want to do this automatically using shell scripting, changing the number of lines to cut on each sed execution.
How can I do this?
Also, on a general note, which text-processing commands in Linux are most useful in such cases?
Assuming your line numbers are in a file called lines, sorted etc., try:
#!/bin/sh
j=0
count=1
while read -r i; do
sed -n $j,$i > filename.$count # etc... details of sed/redirection elided
j=$i
count=$(($count+1))
done < lines
Note. The above doesn't assume a consistent number of lines to split on for each iteration.
Since you've additionally asked for a general utility, try split. However this splits on a consistent number of lines, and is perhaps of limited use here.
Here's an alternative that reads directly from the NUMBERS_FILE (note that the first,+N address form is a GNU sed extension):
n=0; i=1
while read; do
sed -n ${i},+$(( REPLY - 1 ))p ADDRESS_FILE > temp_file_$(( n++ ))
(( i += REPLY ))
done < NUMBERS_FILE
size=$(wc -l < ADDRESSS_FILE)
i=1
n=1
while [ $n -lt $size ]
do
sed -n $n,$((n+19))p ADDRESSS_FILE > temp_file_$i
i=$((i+1))
n=$((n+20))
done
or just
split -l20 ADDRESSS_FILE temp_file_
(thanks Brian Agnew for the idea).
An ugly solution which works with a single sed invocation; it can probably be made less horrible.
This generates a tiny sed script to split the file
#!/bin/bash
sum=0
count=0
sed -n -f <(while read -r n ; do
    echo $((sum+1)),$((sum += n)) "w temp_file_$((count++))" ;
done < NUMBERS_FILE) ADDRESS_FILE
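For example, if NUMBERS_FILE contains the values 20 and 40, the script generated by the process substitution and fed to sed -f is:
1,20 w temp_file_0
21,60 w temp_file_1
so each address range is written to its own temp file in a single pass.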
I have a string in a Bash shell script that I want to split into an array of characters, not based on a delimiter but just one character per array index. How can I do this? Ideally it would not use any external programs. Let me rephrase that. My goal is portability, so things like sed that are likely to be on any POSIX compatible system are fine.
Try
echo "abcdefg" | fold -w1
Edit: Added a more elegant solution suggested in comments.
echo "abcdefg" | grep -o .
You can access each letter individually already without an array conversion:
$ foo="bar"
$ echo ${foo:0:1}
b
$ echo ${foo:1:1}
a
$ echo ${foo:2:1}
r
If that's not enough, you could use something like this:
$ bar=($(echo $foo|sed 's/\(.\)/\1 /g'))
$ echo ${bar[1]}
a
If you can't even use sed or something like that, you can use the first technique above combined with a while loop using the original string's length (${#foo}) to build the array.
Warning: the code below does not work if the string contains whitespace. I think Vaughn Cato's answer has a better chance at surviving with special chars.
thing=($(i=0; while [ $i -lt ${#foo} ] ; do echo ${foo:$i:1} ; i=$((i+1)) ; done))
As an alternative to iterating over 0 .. ${#string}-1 with a for/while loop, there are two other ways I can think of to do this with only bash: using =~ and using printf. (There's a third possibility using eval and a {..} sequence expression, but this lacks clarity.)
With the correct environment and NLS enabled in bash these will work with non-ASCII as hoped, removing potential sources of failure with older system tools such as sed, if that's a concern. These will work from bash-3.0 (released 2005).
Using =~ and regular expressions, converting a string to an array in a single expression:
string="wonkabars"
[[ "$string" =~ ${string//?/(.)} ]] # splits into array
printf "%s\n" "${BASH_REMATCH[#]:1}" # loop free: reuse fmtstr
declare -a arr=( "${BASH_REMATCH[#]:1}" ) # copy array for later
The way this works is to perform an expansion of string which substitutes each single character with (.), then match this generated regular expression with grouping to capture each individual character into BASH_REMATCH[]. Index 0 is set to the entire string; since that special array is read-only you cannot remove it, so note the :1 when the array is expanded, which skips over index 0 if needed.
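To make that concrete, here is what the generated pattern and the captured groups look like for the example string (a quick check you can run yourself):
string="wonkabars"
echo "${string//?/(.)}"       # (.)(.)(.)(.)(.)(.)(.)(.)(.)
[[ "$string" =~ ${string//?/(.)} ]]
echo "${BASH_REMATCH[0]}"     # wonkabars -- index 0 holds the whole match
echo "${BASH_REMATCH[@]:1}"   # w o n k a b a r s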
Some quick testing for non-trivial strings (>64 chars) shows this method is substantially faster than one using bash string and array operations.
The above will work with strings containing newlines; =~ supports POSIX ERE where . matches anything except NUL by default, i.e. the regex is compiled without REG_NEWLINE. (The behaviour of POSIX text processing utilities is allowed to be different by default in this respect, and usually is.)
Second option, using printf:
string="wonkabars"
ii=0
while printf "%s%n" "${string:ii++:1}" xx; do
((xx)) && printf "\n" || break
done
This loop increments index ii to print one character at a time, and breaks out when there are no characters left. This would be even simpler if the bash printf returned the number of characters printed (as in C) rather than an exit status; instead, the number of characters printed is captured in xx using %n. (This works at least as far back as bash-2.05b.)
With bash-3.1 and printf -v var you have slightly more flexibility, and can avoid falling off the end of the string should you be doing something other than printing the characters, e.g. to create an array:
declare -a arr
ii=0
while printf -v cc "%s%n" "${string:(ii++):1}" xx; do
((xx)) && arr+=("$cc") || break
done
If your string is stored in variable x, this produces an array y with the individual characters:
i=0
while [ $i -lt ${#x} ]; do y[$i]=${x:$i:1}; i=$((i+1));done
The simplest, most complete and elegant solution:
$ read -a ARRAY <<< $(echo "abcdefg" | sed 's/./& /g')
and test
$ echo ${ARRAY[0]}
a
$ echo ${ARRAY[1]}
b
Explanation: read -a reads stdin into an array and assigns it to the variable ARRAY, treating spaces as the delimiter between array items.
Echoing the string through sed just adds the needed space after each character.
We are using Here String (<<<) to feed the stdin of the read command.
I have found that the following works the best:
array=( `echo string | grep -o . ` )
(note the backticks)
then if you do: echo ${array[@]} ,
you get: s t r i n g
or: echo ${array[2]} ,
you get: r
Pure Bash solution with no loop:
#!/usr/bin/env bash
str='The quick brown fox jumps over a lazy dog.'
# Need extglob for the replacement pattern
shopt -s extglob
# Split string characters into array (skip first record)
# Character 037 is the octal representation of ASCII Record Separator
# so it can capture all other characters in the string, including spaces.
IFS= mapfile -s1 -t -d $'\37' array <<<"${str//?()/$'\37'}"
# Strip out captured trailing newline of here-string in last record
array[-1]="${array[-1]%?}"
# Debug print array
declare -p array
string=hello123
for i in $(seq 0 $((${#string} - 1)))
do array[$i]=${string:$i:1}
done
echo "zero element of array is [${array[0]}]"
echo "entire array is [${array[@]}]"
The zero element of array is [h]. The entire array is [h e l l o 1 2 3].
Yet another one :). The stated question simply says 'Split string into character array' and doesn't say much about the state of the receiving array, nor about special chars such as whitespace and control chars.
My assumption is that if I want to split a string into an array of chars, I want the receiving array to contain just that string and no leftovers from previous runs, yet preserve any special chars.
For instance, the family of proposed solutions like
for (( i=0 ; i < ${#x} ; i++ )); do y[i]=${x:i:1}; done
leaves leftovers in the target array:
$ y=(1 2 3 4 5 6 7 8)
$ x=abc
$ for (( i=0 ; i < ${#x} ; i++ )); do y[i]=${x:i:1}; done
$ printf '%s ' "${y[@]}"
a b c 4 5 6 7 8
Besides, rather than writing that long line each time we want to split a string, why not hide all this in a function we can keep in a package source file, with an API like
s2a "Long string" ArrayName
I got this one that seems to do the job.
$ s2a()
> { [ "$2" ] && typeset -n __=$2 && unset $2;
> [ "$1" ] && __+=("${1:0:1}") && s2a "${1:1}"
> }
$ a=(1 2 3 4 5 6 7 8 9 0) ; printf '%s ' "${a[@]}"
1 2 3 4 5 6 7 8 9 0
$ s2a "Split It" a ; printf '%s ' "${a[#]}"
S p l i t I t
If the text can contain spaces:
eval a=( $(echo "this is a test" | sed "s/\(.\)/'\1' /g") )
$ echo hello | awk NF=NF FS=
h e l l o
Or
$ echo hello | awk '$0=RT' RS=[[:alnum:]]
h
e
l
l
o
I know this is a "bash" question, but please let me show you the perfect solution in zsh, a shell very popular these days:
string='this is a string'
string_array=(${(s::)string}) #Parameter expansion. And that's it!
print ${(t)string_array} -> type array
print $#string_array -> 16 items
This is an old post/thread, but here is a take using a new feature of bash v5.2+: the shell option patsub_replacement together with the =~ operator for regex. It is more or less the same as @mr.spuratic's post/answer.
str='There can be only one, the Highlander.'
regexp="${str//?/(&)}"
[[ "$str" =~ $regexp ]] &&
printf '%s\n' "${BASH_REMATCH[@]:1}"
Or just (which includes the whole string at index 0):
declare -p BASH_REMATCH
If the whole string at index 0 is not desired, one can remove it with
unset -v 'BASH_REMATCH[0]'
before dumping the array with declare -p, instead of using printf or echo with the :1 offset.
One can check/see the value of the variable "$regexp" with either
declare -p regexp
Output
declare -- regexp="(T)(h)(e)(r)(e)( )(c)(a)(n)( )(b)(e)( )(o)(n)(l)(y)( )(o)(n)(e)(,)( )(t)(h)(e)( )(H)(i)(g)(h)(l)(a)(n)(d)(e)(r)(.)"
or
echo "$regexp"
Using it in a script, one might want to test if the shopt is enabled or not, although the manual says it is enabled by default.
Something like:
if ! shopt -q patsub_replacement; then
shopt -s patsub_replacement
fi
But yeah, check the bash version too, if you're not sure which version of bash is in use.
if ! ((BASH_VERSINFO[0] >= 5 && BASH_VERSINFO[1] >= 2)); then
printf 'No dice! bash version 5.2+ is required!\n' >&2
exit 1
fi
Space can be excluded from regexp variable, change it from
regexp="${str//?/(&)}"
To
regexp="${str//[! ]/(&)}"
and the output is:
declare -- regexp="(T)(h)(e)(r)(e) (c)(a)(n) (b)(e) (o)(n)(l)(y) (o)(n)(e) (t)(h)(e) (H)(i)(g)(h)(l)(a)(n)(d)(e)(r)(.)"
Maybe not as efficient as the other post/answer but it is still a solution/option.
If you want to store this in an array, you can do this:
string=foo
unset chars
declare -a chars
while read -N 1
do
    chars[${#chars[@]}]="$REPLY"
done <<<"$string"x
unset chars[$((${#chars[@]} - 1))]
unset chars[$((${#chars[@]} - 1))]
echo "Array: ${chars[@]}"
Array: f o o
echo "Array length: ${#chars[@]}"
Array length: 3
The final x is necessary to handle the fact that a newline is appended after $string if it doesn't contain one.
If you want to use NUL-separated characters, you can try this:
echo -n "$string" | while read -N 1
do
printf %s "$REPLY"
printf '\0'
done
AWK is quite convenient:
a='123'; echo $a | awk 'BEGIN{FS="";OFS=" "} {print $1,$2,$3}'
where FS and OFS are the input and output field separators (the delimiters for read-in and print-out).
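The print $1,$2,$3 part only works for a 3-character string. A length-independent variant (a sketch, assuming an awk such as gawk or mawk that accepts an empty FS) forces the record to be rebuilt with OFS by reassigning a field:
a='hello123'; echo $a | awk 'BEGIN{FS="";OFS=" "} {$1=$1; print}'
which prints: h e l l o 1 2 3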
For those who landed here searching how to do this in fish:
We can use the builtin string command (since v2.3.0) for string manipulation.
↪ string split '' abc
a
b
c
The output is a list, so array operations will work.
↪ for c in (string split '' abc)
echo char is $c
end
char is a
char is b
char is c
Here's a more complex example iterating over the string with an index.
↪ set --local chars (string split '' abc)
for i in (seq (count $chars))
echo $i: $chars[$i]
end
1: a
2: b
3: c
zsh solution: To put the scalar string variable into arr, which will be an array:
arr=(${(ps::)string})
If you also need support for strings with newlines, you can do:
str2arr(){ local string="$1"; mapfile -d $'\0' Chars < <(for i in $(seq 0 $((${#string}-1))); do printf '%s\u0000' "${string:$i:1}"; done); printf '%s' "(${Chars[*]@Q})" ;}
string=$(printf '%b' "apa\nbepa")
declare -a MyString=$(str2arr "$string")
declare -p MyString
# prints declare -a MyString=([0]="a" [1]="p" [2]="a" [3]=$'\n' [4]="b" [5]="e" [6]="p" [7]="a")
As a response to Alexandro de Oliveira, I think the following is more elegant or at least more intuitive:
while read -r -n1 c ; do arr+=("$c") ; done <<<"hejsan"
declare -r some_string='abcdefghijklmnopqrstuvwxyz'
declare -a some_array
declare -i idx
for ((idx = 0; idx < ${#some_string}; ++idx)); do
    some_array+=("${some_string:idx:1}")
done
for idx in "${!some_array[@]}"; do
    echo "$((idx)): ${some_array[idx]}"
done
Pure bash, no loop.
Another solution, similar to/adapted from Léa Gris' solution, but using read -a instead of readarray/mapfile:
#!/usr/bin/env bash
str='azerty'
# Need extglob for the replacement pattern
shopt -s extglob
# Split string characters into array
# ${str//?()/$'\x1F'} replace each character "c" with "^_c".
# ^_ (Control-_, 0x1f) is Unit Separator (US), you can choose another
# character.
IFS=$'\x1F' read -ra array <<< "${str//?()/$'\x1F'}"
# now, array[0] contains an empty string and the rest of array (starting
# from index 1) contains the original string characters :
declare -p array
# Or, if you prefer to keep the array "clean", you can delete
# the first element and pack the array :
unset array[0]
array=("${array[#]}")
declare -p array
However, I prefer the shorter version (and, for me, easier to understand), where we remove the initial 0x1f before assigning the array:
#!/usr/bin/env bash
str='azerty'
shopt -s extglob
tmp="${str//?()/$'\x1F'}" # same as code above
tmp=${tmp#$'\x1F'} # remove initial 0x1f
IFS=$'\x1F' read -ra array <<< "$tmp" # assign array
declare -p array # verification