I have been trying to download a solution (.sln) from Azure DevOps, but I get a message asking me to remap the local path. The currently mapped local path is exactly where I want it on my desktop, yet this pop-up keeps appearing and then I can't open the project file. Here is the source-control section of the .sln:
GlobalSection(TeamFoundationVersionControl) = preSolution
SccNumberOfProjects = 3
SccEnterpriseProvider = {4CA58AB2-18FA-4F8D-95D4-32DDF27D184C}
SccTeamFoundationServer = https://nccn.visualstudio.com/defaultcollection
SccProjectUniqueName0 = ..\\NCCN\u0020Libraries\\GuidelineDataLayer\\GuidelineDataLayer\\GuidelineDataLayer.csproj
SccProjectName0 = ../../NCCN\u0020Libraries/GuidelineDataLayer/GuidelineDataLayer
SccLocalPath0 = ..\\NCCN\u0020Libraries\\GuidelineDataLayer\\GuidelineDataLayer
SccLocalPath1 = .
SccProjectUniqueName2 = NccnWebApi\\Nccn.WebService.GAT.csproj
SccProjectName2 = NccnWebApi
SccLocalPath2 = NccnWebApi
EndGlobalSection
I need to use Python 3 to create a shortcut that opens a website. The file must have the extension ".url". How can I do this?
I found some code, but it doesn't work for me:
import win32com.client

bmurl = card.url                    # some link
bmpath = path_to_card + "link.url"  # path to the .url file
ws = win32com.client.Dispatch("WScript.Shell")
scut = ws.CreateShortcut(bmpath)
scut.TargetPath = bmurl             # this line raises the error
scut.Save()
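For what it's worth, a .url file is just a small INI-style text file, so one possible workaround (a minimal sketch; the link and output path below are placeholders rather than values from your code) is to write it directly without going through COM:

import os

bmurl = "https://example.com/"          # the link the shortcut should open (placeholder)
bmpath = os.path.join(".", "link.url")  # where to write the shortcut file (placeholder)

# An Internet Shortcut is a plain-text INI file with an [InternetShortcut] section
with open(bmpath, "w") as f:
    f.write("[InternetShortcut]\nURL={}\n".format(bmurl))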
Please see below. Do you know how I can have this .py (run in a notebook in the AML portal, and then from a pipeline) create a file in the same notebook portal directory, i.e. /mnt/batch/tasks/shared/LS_root/mounts/clusters/USERxxx?
The pipeline seems to create the file in its own temporary directory instead:
import os
from azureml.core import Workspace, Experiment
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

source_directory = '/mnt/batch/tasks/shared/LS_root/mounts/clusters/USERxxxx.'
print('Source directory for the step is {}.'.format(os.path.realpath(source_directory)))

aml_run_config = RunConfiguration()
aml_run_config.target = "XXXXX"
aml_run_config.environment.python.user_managed_dependencies = False
aml_run_config.environment.python.conda_dependencies = CondaDependencies.create(
    conda_packages=['pandas', 'openpyxl', 'pyodbc', 'sqlalchemy'],
    pip_packages=['pandas==1.4.4', 'openpyxl', 'pyodbc', 'sqlalchemy',
                  'azureml-sdk', 'azureml-dataprep[fuse,pandas]'],
    pin_sdk_version=False)

aml_run_config = RunConfiguration(framework="python",
                                  conda_dependencies=aml_run_config.environment.python.conda_dependencies)

step1 = PythonScriptStep(name="hello-step1",
                         script_name="hello_world.py",
                         compute_target="XXXXX",
                         runconfig=aml_run_config,
                         source_directory=source_directory,
                         allow_reuse=True)

pipeline1 = Pipeline(workspace=ws, steps=step1)
pipeline1.validate()

pipeline_run1 = Experiment(ws, 'trigger-hello-experiment').submit(pipeline1, regenerate_outputs=False)
print("Pipeline is submitted for execution")
Many thanks.
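For reference, here is a minimal sketch of what hello_world.py could look like, under the assumption that the compute target mounts the same file share as the notebook VM (the mount path below is just the placeholder from the question). The step itself runs in a temporary snapshot directory, so writing to an explicit absolute path is one way to land the file in the notebook directory:

import os

# The step's working directory is a temporary snapshot folder on the compute target
print("Step working directory:", os.getcwd())

# Assumption: this mount path (placeholder from the question) is reachable from the
# compute target, which is only the case if it mounts the same file share as the notebook VM
target_dir = "/mnt/batch/tasks/shared/LS_root/mounts/clusters/USERxxx"

out_path = os.path.join(target_dir, "hello.txt")
with open(out_path, "w") as f:
    f.write("hello from the pipeline step\n")
print("Wrote", out_path)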
Here's my sample code which works:
import os, io, dropbox

def createFolder(dropboxBaseFolder, newFolder):
    # creating a temp dummy destination file path
    dummyFileTo = dropboxBaseFolder + newFolder + '/' + 'temp.bin'

    # creating a virtual in-memory binary file
    f = io.BytesIO(b"\x00")

    # uploading the dummy file in order to cause creation of the containing folder
    dbx.files_upload(f.read(), dummyFileTo)

    # now that the folder is created, delete the dummy file
    dbx.files_delete_v2(dummyFileTo)

accessToken = '....'
dbx = dropbox.Dropbox(accessToken)

dropboxBaseDir = '/test_dropbox'
dropboxNewSubDir = '/new_empty_sub_dir'

createFolder(dropboxBaseDir, dropboxNewSubDir)
But is there a more efficient or simpler way to do this task?
Yes, as Ronald mentioned in the comments, you can use the files_create_folder_v2 method to create a new folder.
That would look like this, modifying your code:
import dropbox
accessToken = '....'
dbx = dropbox.Dropbox(accessToken)
dropboxBaseDir = '/test_dropbox'
dropboxNewSubDir = '/new_empty_sub_dir'
res = dbx.files_create_folder_v2(dropboxBaseDir + dropboxNewSubDir)
# access the information for the newly created folder in `res`
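As a side note (a hedged sketch, not part of the original answer): files_create_folder_v2 raises dropbox.exceptions.ApiError when the folder, or a file with the same name, already exists, so you may want to catch that case:

import dropbox

accessToken = '....'
dbx = dropbox.Dropbox(accessToken)

try:
    res = dbx.files_create_folder_v2('/test_dropbox/new_empty_sub_dir')
    print('Created folder:', res.metadata.path_display)
except dropbox.exceptions.ApiError as err:
    # Most commonly a path conflict: the folder (or a file with that name) already exists
    print('Folder was not created:', err)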
How can I use dcmprscp to receive a DICOM file from a Print SCU and save it? I'm using DCMTK 3.6 and I'm having some trouble using it with just the default help. This is what I'm doing in CMD:
dcmprscp.exe --config dcmpstat.cfg --printer PRINT2FILE
Each time I receive this message, but database\index.dat doesn't exist in Windows:
W: $dcmtk: dcmprscp v3.6.0 2011-01-06 $
W: 2016-02-21 00:08:09
W: started
E: database\index.dat: No such file or directory
F: Unable to access database 'database'
I tried to follow some tips, but got the same result:
http://www.programmershare.com/2468333/
http://www.programmershare.com/3020601/
And this is my PRINT2FILE printer config:
[PRINT2FILE]
hostname = localhost
type = LOCALPRINTER
description = PRINT2FILE
port = 20006
aetitle = PRINT2FILE
DisableNewVRs = true
FilmDestination = MAGAZINE\PROCESSOR\BIN_1\BIN_2
SupportsPresentationLUT = true
PresentationLUTinFilmSession = true
PresentationLUTMatchRequired = true
PresentationLUTPreferSCPRendering = false
SupportsImageSize = true
SmoothingType = 0\1\2\3\4\5\6\7\8\9\10\11\12\13\14\15
BorderDensity = BLACK\WHITE\150
EmptyImageDensity = BLACK\WHITE\150
MaxDensity = 320\310\300\290\280\270
MinDensity = 20\25\30\35\40\45\50
Annotation = 2\ANNOTATION
Configuration_1 = PERCEPTION_LUT=OEM001
Configuration_2 = PERCEPTION_LUT=KANAMORI
Configuration_3 = ANNOTATION1=FILE1
Configuration_4 = ANNOTATION1=PATID
Configuration_5 = WINDOW_WIDTH=256\WINDOW_CENTER=128
Supports12Bit = true
SupportsDecimateCrop = false
SupportsTrim = true
DisplayFormat=1,1\2,1\1,2\2,2\3,2\2,3\3,3\4,3\5,3\3,4\4,4\5,4\6,4\3,5\4,5\5,5\6,5\4,6\5,6
FilmSizeID = 8INX10IN\11INX14IN\14INX14IN\14INX17IN
MediumType = PAPER\CLEAR FILM\BLUE FILM
MagnificationType = REPLICATE\BILINEAR\CUBIC
The documentation of the "dcmprscp" tool says:
The dcmprscp utility implements the DICOM Basic Grayscale Print
Management Service Class as SCP. It also supports the optional
Presentation LUT SOP Class. The utility is intended for use within the
DICOMscope viewer.
That means it is usually not run from the command line (as most of the other DCMTK tools are) but is started automatically in the background by DICOMscope.
Anyway, I think the error message is clear:
E: database\index.dat: No such file or directory
F: Unable to access database 'database'
Did you check whether there is a subdirectory "database" and whether the "index.dat" file exists in that directory? If you are wondering why a "database" is needed, please read the next paragraph of the documentation:
The dcmprscp utility accepts print jobs from a remote Print SCU.
It does not create real hardcopies but stores print jobs in the local
DICOMscope database as a set of Stored Print objects (one per page)
and Hardcopy Grayscale images (one per film box N-SET)
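If the directory is simply missing, one possible workaround (my assumption, not something the dcmprscp documentation prescribes) is to create the "database" subdirectory in the working directory before starting the SCP, for example:

import os
import subprocess

# Assumption: dcmprscp resolves the configured 'database' path relative to its working directory
os.makedirs("database", exist_ok=True)

# Start the print SCP with the same options as in the question
subprocess.run(["dcmprscp.exe", "--config", "dcmpstat.cfg", "--printer", "PRINT2FILE"])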
How do I stream a log file from Windows 7 to HDFS on Linux?
Flume on Windows is giving an error.
I have installed 'flume-node-0.9.3' on Windows 7 (Node 1). The 'flumenode' service is running and localhost:35862 is accessible.
In Windows, the log file is located at 'C:/logs/Weblogic.log'
The Flume agent in CentOS Linux (Node 2) is also running.
In Windows machine, JAVA_HOME variable is set to "C:\Program Files\Java\jre7"
The Java.exe file is located at "C:\Program Files\Java\jre7\bin\java.exe"
Flume node is installed at "C:\Program Files\Cloudera\Flume 0.9.3"
Here is the flume-src.conf file placed inside the 'conf' folder of Flume on Windows 7 (Node 1):
source_agent.sources = weblogic_server
source_agent.sources.weblogic_server.type = exec
source_agent.sources.weblogic_server.command = tail -f C:/logs/Weblogic.log
source_agent.sources.weblogic_server.batchSize = 1
source_agent.sources.weblogic_server.channels = memoryChannel
source_agent.sources.weblogic_server.interceptors = itime ihost itype
source_agent.sources.weblogic_server.interceptors.itime.type = timestamp
source_agent.sources.weblogic_server.interceptors.ihost.type = host
source_agent.sources.weblogic_server.interceptors.ihost.useIP = false
source_agent.sources.weblogic_server.interceptors.ihost.hostHeader = host
source_agent.sources.weblogic_server.interceptors.itype.type = static
source_agent.sources.weblogic_server.interceptors.itype.key = log_type
source_agent.sources.weblogic_server.interceptors.itype.value = apache_access_combined
source_agent.channels = memoryChannel
source_agent.channels.memoryChannel.type = memory
source_agent.channels.memoryChannel.capacity = 100
source_agent.sinks = avro_sink
source_agent.sinks.avro_sink.type = avro
source_agent.sinks.avro_sink.channel = memoryChannel
source_agent.sinks.avro_sink.hostname = 10.10.201.40
source_agent.sinks.avro_sink.port = 41414
I tried to run the above-mentioned file by executing the following command inside the Flume folder:
C:\Program Files\Cloudera\Flume 0.9.3>"C:\Program Files\Java\jre7\bin\java.exe"
-Xmx20m -Dlog4j.configuration=file:///%CD%\conf\log4j.properties -cp "C:\Program Files\Cloudera\Flume 0.9.3\lib*" org.apache.flume.node.Application
-f C:\Program Files\Cloudera\Flume 0.9.3\conf\flume-src.conf -n source_agent
But it gives the following message:
Error: Could not find or load main class Files\Cloudera\Flume
Here is the trg-node.conf file running in CentOS (Node 2). The CentOS node is working fine:
collector.sources = AvroIn
collector.sources.AvroIn.type = avro
collector.sources.AvroIn.bind = 0.0.0.0
collector.sources.AvroIn.port = 41414
collector.sources.AvroIn.channels = mc1 mc2
collector.channels = mc1 mc2
collector.channels.mc1.type = memory
collector.channels.mc1.capacity = 100
collector.channels.mc2.type = memory
collector.channels.mc2.capacity = 100
collector.sinks = HadoopOut
collector.sinks.HadoopOut.type = hdfs
collector.sinks.HadoopOut.channel = mc2
collector.sinks.HadoopOut.hdfs.path =/user/root
collector.sinks.HadoopOut.hdfs.callTimeout = 150000
collector.sinks.HadoopOut.hdfs.fileType = DataStream
collector.sinks.HadoopOut.hdfs.writeFormat = Text
collector.sinks.HadoopOut.hdfs.rollSize = 0
collector.sinks.HadoopOut.hdfs.rollCount = 10000
collector.sinks.HadoopOut.hdfs.rollInterval = 600
The problem is due to the white space between "Program" and "Files" in this path:
C:\Program Files\Cloudera\Flume 0.9.3
Because the path is not quoted everywhere it appears (for example in the %CD% expansion of the log4j option), it is split at the space, so Java treats Files\Cloudera\Flume as the main class name, which is exactly the error you got. Consider installing Flume in a path without whitespace; it will work like a charm.