I'm using DCMTK 3.6.1 on Windows 8. I cannot access a PACS server using dcmqrscp and echoscu with the following dcmqrscp.cfg (the error is "Called AE Title is not recognised").
NetworkTCPPort = 11112
MaxPDUSize = 16384
MaxAssociations = 16
HostTable BEGIN
PACS1 = (PACS_SRC1, localhost, 11112)
PACSSRC = PACS1
HostTable END
VendorTable BEGIN
"PACS source" = PACSSRC
VendorTable END
AETable BEGIN
PACS_SRC G:\develop\studyaccess\test\PACS_SRC RW (100, 1024mb) PACSSRC
AETable END
Commands:
dcmqrscp.exe -v -d --config dcmqrscp.cfg --propose-lossless 11112 > dcmqrscp.out
echoscu.exe -v -d localhost 11112 -aec PACS_SRC -aet echoscu
echoscu.exe -v -d localhost 11112 -aec PACS1 -aet echoscu
However, if I use ANY it does work:
PACS_SRC G:\develop\studyaccess\test\PACS_SRC RW (100, 1024mb) ANY
This indicates that the databases, paths and data are correct, but something else is wrong. I've turned Windows Firewall off.
I've also tried using DCMTK executables generated when compiling CTK (which is based on DCMTK), but they give the same result. CTK itself works with C-GET but not with C-STORE (as part of C-MOVE).
Any other ideas?
Eddie
Your two calls of the echoscu tool use "echoscu" as the Calling AE Title (-aet option). If you specify "PACSSRC" instead of "ANY" in the dcmqrscp.cfg file, the Calling AE Title is checked against the peers allowed for this "company", and "echoscu" is not found among them, so the association is rejected. Your second call of echoscu (with option "-aec PACS1") will not work either, since "PACS1" is not specified as a storage area in the "AETable" section of the config file.
See the documentation file dcmqrcnf.txt or the dcmqrscp HOWTO for details.
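For example (a sketch, assuming you want to keep the restricted peer list rather than ANY): the echo should be accepted when the Calling AE Title matches the AE title defined in the HostTable entry, i.e.
echoscu.exe -v -d localhost 11112 -aec PACS_SRC -aet PACS_SRC1
Alternatively, you could add a HostTable entry for your echoscu client (the AE title, host and port below are assumptions, adjust them to your setup) and list it in the AETable access list:
ECHOSCU = (echoscu, localhost, 104)
PACS_SRC G:\develop\studyaccess\test\PACS_SRC RW (100, 1024mb) PACSSRC, ECHOSCU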
I am on Qubes OS, which means Fedora 25 as dom0. I would like to change the configuration of the "notification area" (alias "systray") plugin of Xfce. How can I do it? I would like to delete/add one item.
The GUI only gives me the option to hide items behind an ugly arrow on the side or to "clear all known applications". However, with the last option I am afraid of losing the notification area as it is and never getting it back.
I looked with the "find" command for "xfce4", "xfce4-plugins" and so on. None of the files I could find, e.g. in ~/.config/xfce4, helped me. I cannot find a config file for the plugin anywhere.
Thanks in advance :)
Known applications are stored as an array in xfconf, in the xfce4-panel channel, under the property /plugins/plugin-[id]/known-items, where the plugin id is dynamic and depends on the order in which plugins were added to the panel.
You could hack your way around by messing with ~/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-panel.xml, but I strongly advise you not to; instead, use xfconf-query to read and set the values.
I'm going to write down some snippets below so you can use them to craft a script that suits your needs:
# Find the plugin id, can be empty if systray is not present
xfconf-query -c xfce4-panel -p /plugins -l -v | grep systray | grep -Po "plugin-\\d+" | head -n1
# Get array of current known apps
xfconf-query -c xfce4-panel -p /plugins/$PLUGIN_ID/known-items | tail -n +3
# Set array of known apps
xfconf-query -c xfce4-panel -p /plugins/$PLUGIN_ID/known-items -t string -s value1 -t string -s value2 ...
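Putting the snippets together, a minimal sketch of such a script could look like this (the two item names at the end are placeholders, replace them with the applications you actually want to keep):
#!/bin/sh
# Find the systray plugin id (empty if the plugin is not present)
PLUGIN_ID=$(xfconf-query -c xfce4-panel -p /plugins -l -v | grep systray | grep -Po "plugin-\d+" | head -n1)
# Print the current list so you can copy the exact item names
xfconf-query -c xfce4-panel -p /plugins/$PLUGIN_ID/known-items | tail -n +3
# Write back only the items you want to keep (placeholder values)
xfconf-query -c xfce4-panel -p /plugins/$PLUGIN_ID/known-items -t string -s "nm-applet" -t string -s "blueman-applet"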
I'm running the command ascp -v -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh -k 1 -T -l200m anonftp@ftp-private.ncbi.nlm.nih.gov:/sra/sra-instant/reads/ByRun/sra/SRR/SRR590/SRR5907429 /SRR5907429 .sra ~/sra_download on Linux
and I get this error:
"user@host:" in all sources must match
What does this mean? How do I solve it?
First,"-private"should be removed.Secondly,need to correct the space error in the sentence,example "SRR5907429 ".'ascp -v -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh -k 1 -T -l200m anonftp#ftp.ncbi.nlm.nih.gov:/sra/sra-instant/reads/ByRun/sra/SRR/SRR590/SRR5907429/SRR5907429.sra ~/sra_download'is the correct answer we need.enter image description here
Your problem:
The ascp syntax is:
Usage: ascp [OPTION] SRC... DEST
SRC to DEST, or multiple SRC to DEST dir
SRC, DEST format: [[user@]host:]PATH
Display full usage: -h,--help
You get this by simply executing ascp; get more with "ascp -h", and there is a manual for it as well: https://download.asperasoft.com/download/docs/entsrv/3.9.1/es_admin_linux/webhelp/index.html#dita/ascp_2.html
It is pretty much like "scp", but it also works in "pull" mode.
So you have:
options, then one or more sources, then a single destination (always the last argument).
If the destination is user@server:folder, then you do a push.
If a source is user@server:folder, then you do a pull.
Globally, you can only do a push or a pull at a time, but there can be multiple sources and always a single destination (on the command line).
In your case you have:
options: -v -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh -k 1 -T -l200m
sources: anonftp@ftp-private.ncbi.nlm.nih.gov:/sra/sra-instant/reads/ByRun/sra/SRR/SRR590/SRR5907429 /SRR5907429 .sra
destination: ~/sra_download
The first source is: anonftp@ftp-private.ncbi.nlm.nih.gov:/sra/sra-instant/reads/ByRun/sra/SRR/SRR590/SRR5907429
The other sources are: /SRR5907429 and .sra
So you specify one remote source, two local sources, and one local destination.
That mix is why you get this error.
My advice:
Do not use the legacy syntax, as you did; instead, use the advanced syntax:
ascp [options] --mode=<send|recv> --user=<user> --host=<server> sources... destination
There are plenty of options; for instance, if all your source files are in the same folder, you can use --source-prefix=
You can also use a file list file (i.e. a file that contains the list of files you want to transfer, in case the list is long and generated by a script) or even a file pair list file.
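For example, the download from the question could be written with the advanced syntax roughly like this (a sketch, assuming the anonftp account and the bundled key from the original command still apply):
ascp -v -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh -k 1 -T -l200m \
  --mode=recv --user=anonftp --host=ftp.ncbi.nlm.nih.gov \
  /sra/sra-instant/reads/ByRun/sra/SRR/SRR590/SRR5907429/SRR5907429.sra ~/sra_download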
Note also, that there is an interesting front end for aspera command line transfers:
https://www.rubydoc.info/gems/asperalm
I am new to Bowtie. I am trying to use Bowtie for end to end local alignment. I've got this error message:
Could not locate a Bowtie index corresponding to basename "/bowtie2-index/hg19"
In my installation, the bowtie2-index/hg19 folder contains six .bt2 files. I am using the following command:
/opt/bowtie2/bowtie2-align-s --wrapper basic-0 -p 64 -x /mnt/miczfs/tide/bowtie2-index/hg19 -S /mnt/miczfs/tide/Data/chr2chr3/chr2chr3.sam -1 /mnt/miczfs/tide/Data/chr2chr3/chr2chr3.f1.fastq -2 /mnt/miczfs/tide/Data/chr2chr3/chr2chr3.f2.fastq
This is a perennial question, I guess the documentation isn't explicit enough here. By using -x /mnt/miczfs/tide/bowtie2-index/hg19, you're telling bowtie2 that you have files like /mnt/miczfs/tide/bowtie2-index/hg19.1.bt2 that it should use. You don't specify a folder, you specify a "basename". You probably meant -x /mnt/miczfs/tide/bowtie2-index/hg19/hg19 or something like that.
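A sketch of the corrected command, assuming the six index files inside that folder are named hg19.1.bt2 ... hg19.rev.2.bt2 (adjust the basename if they use a different prefix):
/opt/bowtie2/bowtie2-align-s --wrapper basic-0 -p 64 \
  -x /mnt/miczfs/tide/bowtie2-index/hg19/hg19 \
  -1 /mnt/miczfs/tide/Data/chr2chr3/chr2chr3.f1.fastq \
  -2 /mnt/miczfs/tide/Data/chr2chr3/chr2chr3.f2.fastq \
  -S /mnt/miczfs/tide/Data/chr2chr3/chr2chr3.sam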
I have a problem with the command-line tool "smbclient" from Samba on ARM.
I wrote a script to download files from a Windows share.
Here is the smb part of that script:
smbclient //CNAME/SNAME -I0.0.0.0 -N -c "case_sensitive; cd folder; prompt; mget file"
echo $?
My problem is the exit codes.
If the file is downloaded completely, the exit code is 0 (OK)
If the file cannot be downloaded, the exit code is 1 (OK)
If the test machine loses the connection to the share while downloading a file, the exit code is 0 (NOT GOOD), but an error ("Lost connection ... etc.") is written to the console. (OK)
I tried it with two different versions:
samba-3.0.32
samba-3.6.19
Both behave the same.
Does someone know a good workaround (or smbclient argument) to let my script know that the download failed?
PS: I checked the smbclient sources. It looks like they forgot to set the exit code: whenever some other error occurs, they set the error message and call e.g. exit(1), but for timeouts they only set the error message.
Thank you in advance!
What would be best is to use the -E argument to smbclient and redirect stderr to a file (2>errorlog) from the command line. You can then check this file to see if any errors occurred.
Warning: the first line is always the Domain=... banner, so you may need to strip that line out.
Something like this:
smbclient Hostname -A authfile -E 1>log 2>errorlog <<-EOF
get foo
EOF
In errorlog you should find something like the lines below; your log file will be empty:
Domain=[Hostname] OS=[Windows Server 2008 R2 Standard 7601 Service Pack 1] Server=[Windows Server 2008 R2 Standard 6.1]
NT_STATUS_OBJECT_NAME_NOT_FOUND opening remote file \foo
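The original script could then detect a failed download roughly like this (a sketch; the grep patterns are assumptions based on typical smbclient error messages):
smbclient //CNAME/SNAME -I0.0.0.0 -N -E -c "case_sensitive; cd folder; prompt; mget file" 1>log 2>errorlog
# Skip the Domain=... banner line, then look for any remaining error output
if tail -n +2 errorlog | grep -q -e "NT_STATUS" -e "Lost connection"; then
    echo "download failed" >&2
    exit 1
fi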
Is there a way to compress core files during core dump generation?
If storage space on the system is limited, is there a way to conserve it when core dumps have to be generated, by compressing them immediately?
Ideally the method would work on older versions of Linux such as 2.6.x.
The Linux kernel /proc/sys/kernel/core_pattern file will do what you want: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#191
Set the filename to something like |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz and your core files should be saved compressed for you.
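A sketch of what that looks like (run as root; the target directory is an assumption and must already exist):
mkdir -p /var/crash
echo '|/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz' > /proc/sys/kernel/core_pattern
Note that, as a later answer here points out, some kernels pass the "> ..." part to gzip as arguments instead of performing a redirection; in that case a small wrapper script is needed instead.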
For embedded Linux systems, the following script-based change works well to generate compressed core files, in two steps.
Step 1: create a script
touch /bin/gen_compress_core.sh
chmod +x /bin/gen_compress_core.sh
cat > /bin/gen_compress_core.sh
#!/bin/sh
exec /bin/gzip -f - >"/var/core/core-$1.$2.gz"
(press Ctrl+D to end the input)
Step 2: update the core pattern file
cat > /proc/sys/kernel/core_pattern
|/bin/gen_compress_core.sh %e %p
(press Ctrl+D to end the input)
As suggested by the other answer, the Linux kernel /proc/sys/kernel/core_pattern file is a good place to start: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#141
As the documentation says, you can specify the special character "|", which tells the kernel to pipe the core file to a script. As suggested, you could use |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz as the pattern; however, it doesn't seem to work for me. I expect the reason is that the kernel doesn't treat the > character as output redirection, but rather passes it as a parameter to gzip.
To avoid this problem, as others suggested, you can create a small script in some location; I am using /home/<username>/crashes/core.sh. Create it using the following command, replacing <username> with your user (you can obviously also change the entire path):
echo -e '#!/bin/bash\nexec /bin/gzip -f - >"/home/<username>/crashes/core-$1-$2-$3-$4-$5.gz"' > ~/crashes/core.sh
This script takes 5 input parameters, concatenates them and appends them to the core file name. The full path must be spelled out inside ~/crashes/core.sh, and the location of the script itself is up to you. Now let's tell the kernel to use our executable, with parameters, when generating the core file:
sudo sysctl -w kernel.core_pattern="|/home/<username>/crashes/core.sh %e %p %h %t"
Again, <username> should be replaced (or the entire path adjusted to match the location and name of your core.sh script). The next step is to crash some program; let's create an example crashing cpp file:
int main () {
    int * a = nullptr;  // null pointer
    int b = *a;         // dereferencing it triggers SIGSEGV and a core dump
}
After compiling and running it there are two possibilities; either we will see:
Segmentation fault (core dumped)
Or
Segmentation fault
If we see the latter, there are a few possible reasons:
ulimit is not set; ulimit -c should show the limit for core files (see the quick check after the sysctl command below)
apport or your distro's core dump collector is not running; this should be investigated further
there is an error in the script we wrote; to rule out the other causes, I suggest first checking with a plain dump path. The command below should create /tmp/core.dump:
sudo sysctl -w kernel.core_pattern="/tmp/core.dump"
I know there is already an answer to this question; however, it wasn't obvious to me why it isn't working "out of the box", so I wanted to summarize my findings. I hope it helps someone.