I need to verify that our Master Schedule reflects all of the physical work that falls under our SOW (Statement of Work).
I have a sheet that does this fairly simply, but I want to make it a simple cut-and-paste from our Schedule export, so that it reflects current data rather than data from several months ago.
I can handle the date part, but I can't seem to get the logic right in my head for how I should reference the data to double-check compliance.
Each SOW Paragraph has a number assigned to it, and that is what each task would refer to.
Activity Type  Activity ID        CAM    WBS       OBS      % Complete  SOW
ASAP           49.3.4.2.3.2.030   Pam    77.4.1.2  C.11.1   20%         1.0
ASAP           C2_4.20.010.011    Shaun  77.3.1    C.5.4.2  15%         3.6.1.2
ASAP           C2_4.20.010.012    Shaun  77.3.1    C.5.4.2  0%          3.6.1.2
ASAP           69.HP.5.1.2.15.30  Mark   77.1.1    C.11.1   50%         3.2
ASAP           C2_6.1.5.15        Brett  77.2.1    C.3.2.1  100%        5.0
ASAP           C2_2.10.55         Susan  77.2.1    C.5.4.1  60%         6.0
ASAP           29.3.2.11.1.20     Eric   77.4.1    C.11.1   20%         1.0
ASAP           1Z.DIL.0025        Adam   77.1.1    C.1.1    10%         1.1.2
Say the table above is an export from our schedule.
I need to make sure those SOW numbers are lined up with the full SOW, but only for the Activity IDs that start with C2. Below is an example of the result.
SOW Number  In Master Schedule?
1.0         No
1.1         No
1.1.1       Yes
1.1.2       No
1.1.3       No
3.0         No
3.2         No
3.3         No
3.5         No
3.6         No
3.6.1       No
3.6.1.2     Yes
4.0         No
5.0         Yes
5.1         No
5.2         No
6.0         Yes
The logic I'm trying to figure out with the formulas is:
If A# is in MS_Tab $G:$G, does the ID number start with a C2? If Yes, list "Yes" under "In MS?", else list "No"
I tried a VLOOKUP first and, of course, that was incorrect since it only finds the first instance of the search value. Then I thought of pairing a VLOOKUP with an HLOOKUP, but that runs into the same problem. I read that I could use INDEX and MATCH, but I haven't been able to get the logic to work for me.
Note: I cannot change the IDs to make my life easier; this is using an Excel export from our master schedule.
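For what it's worth, the check can be expressed without any lookup at all by using COUNTIFS with a wildcard. This is a sketch, not your exact layout: it assumes the SOW list being checked starts in A2, and that on MS_Tab the Activity IDs sit in column B and the SOW numbers in column G (the question only confirms column G), so adjust the columns to your export. Both SOW columns also need to be stored the same way (ideally as text) so values like 3.6.1.2 and 1.0 compare correctly.

```excel
=IF(COUNTIFS(MS_Tab!$G:$G, A2, MS_Tab!$B:$B, "C2*") > 0, "Yes", "No")
```

COUNTIFS counts every row where the SOW matches and the Activity ID starts with C2, which avoids VLOOKUP's first-match-only limitation; fill the formula down the "In Master Schedule?" column.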
I have modified some VB sample code to get most of what I need done using the QuickBooks SDK in an app launched from Excel using VBA. I am able to produce both a Time by Job Summary report and a Job Estimates vs. Actuals report, but for the latter I need to produce filtered copies of it for each customer:job reference number, and I'm not sure what the proper syntax is for this even after looking over the specific query in the API Reference for QB Desktop.
I'm fairly sure that this needs to be done during the request phase. Also, I'm using QBFC, so I have tried various combinations that seem logical, but still haven't received the desired output. If it helps, an example of what is needed for the filter would be: 20-5050 Dan Barton Trucks. Below is my code for the request:
Set jobRQ = requestSet.AppendJobReportQueryRq
customerRef = "20-5050 Dan Barton Trucks"
With jobRQ
    .JobReportType.SetValue ENJobReportType.jrtJobEstimatesVsActualsSummary
    .ReportEntityFilter.ORReportEntityFilter.EntityTypeFilter.SetValue etfCustomer
    ' .ReportEntityFilter.ORReportEntityFilter.FullNameList.Add (customerRefID)
    .ORReportPeriod.ReportPeriod.FromReportDate.SetValue dateFrom
    .ORReportPeriod.ReportPeriod.ToReportDate.SetValue dateTo
    .SummarizeColumnsBy.SetValue scbTotalOnly
    .IncludeSubcolumns.SetValue True
    .DisplayReport.SetValue True
End With
I have commented out the line that doesn't work.
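In case it helps, two things stand out (assumptions on my part, not verified against the SDK docs): the commented-out line passes customerRefID while the variable assigned above is customerRef, and ORReportEntityFilter is an OR group, so setting both EntityTypeFilter and FullNameList in the same request may conflict. A sketch using the name filter alone:

```vb
' Sketch: filter by one customer:job full name instead of by entity type.
' Assumes only one branch of the OR group should be set per request.
With jobRQ.ReportEntityFilter.ORReportEntityFilter
    .FullNameList.Add customerRef   ' "20-5050 Dan Barton Trucks"
End With
```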
I am teaching a man the basics of QGis for a project he needs to do at his work. He has very little computer knowledge and would like to standardise the work as much as possible (specific workarounds would complicate it too much for him). His QGis version is 3.16 "Hannover" and as this is a work laptop he does not have permission to download a newer version.
We have been having problems with one specific table. The first few rows are below, written exactly as they appear in the original.
Baum-Nr.  Baumart  BHD  Alter  Y         X         Biotopbaum  Klassifizierung  Bemerkungen
1         Buche    86   120    49.1356   11.0488   A           Altbaum          Freistellen !!!
2         Kiefer   45   100    49.13561  11.04883  Hlb,Bs,Th   Höhlenbaum
3         Kiefer   32   100    49.13571  11.04579  Hlb,Sw,Th   Höhlenbaum
4         Kiefer   74   120    49.13513  11.0495   A           Altbaum
After adding it from Excel to QGis through "add vector layer", the header "Klassifizierung" becomes one of the coordinates, and I believe one of the columns is switched (unfortunately, I can't remember the specifics. This is a small side job and I haven't had time to look into it for days. I should have taken a photo, but that isn't possible anymore). We attempted to copy the column into a new Excel document and transfer it to QGis again, and this time the headers were shifted one cell to the right, such that "Y" was placed over "X" and "Biotopbaum" over "Klassifizierung", for example.
I could not find a way to fix the import problem in his laptop. He e-mailed me the problematic table and I opened it successfully in my QGis 3.26 "Buenos Aires".
I believe this may be a problem with his QGis version, but it is curious that we only encountered it with this one table. All other tables we have worked with have the same headers and the same kind of data on their respective rows.
Is this a known problem, or have other people faced similar situations? Could someone explain what could be causing it? Would there be a way to fix it such that we can successfully import the table without having to edit it in QGis? This is not a solution the man would accept.
Thank you in advance.
Remove the commas in the Biotopbaum field or replace them with a less common delimiter. In fact, remove all punctuation (e.g., for Baum-Nr., remove the period ".").
Also save the table into a csv format and try to import.
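A minimal sketch of that cleanup in Python (the semicolon replacement and the function interface are my assumptions; it swaps the commas inside fields for semicolons and writes a tab-delimited copy, so a naive CSV import cannot mistake "Hlb,Bs,Th" for three separate fields):

```python
import csv

def clean_table(src_path, dst_path):
    """Replace commas inside fields with semicolons and write a
    tab-delimited copy of the table. A sketch; pick whatever
    replacement character and file names suit the workflow."""
    with open(src_path, encoding="utf-8", newline="") as src, \
         open(dst_path, "w", encoding="utf-8", newline="") as dst:
        writer = csv.writer(dst, delimiter="\t")
        for row in csv.reader(src):
            # Each cell is cleaned individually, so quoted fields
            # like "Hlb,Bs,Th" survive as a single column.
            writer.writerow(cell.replace(",", ";") for cell in row)
```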
I'm having a problem inputting tab delimited files into the stanford classifier.
Although I was able to successfully walk through all the included stanford tutorials, including the newsgroup tutorial, when I try to input my own training and test data it doesn't load properly.
At first I thought the problem was that I was saving the data into a tab delimited file using an Excel spreadsheet and it was some kind of encoding issue.
But then I got exactly the same results when I did the following. First I literally typed the demo data below into gedit, making sure to use a tab between the politics/sports class and the ensuing text:
politics Obama today announced a new immigration policy.
sports The NBA all-star game was last weekend.
politics Both parties are eyeing the next midterm elections.
politics Congress votes tomorrow on electoral reforms.
sports The Lakers lost again last night, 102-100.
politics The Supreme Court will rule on gay marriage this spring.
sports The Red Sox report to spring training in two weeks.
sports Messi set a world record for goals in a calendar year in 2012.
politics The Senate will vote on a new budget proposal next week.
politics The President declared on Friday that he will veto any budget that doesn't include revenue increases.
I saved that as myproject/demo-train.txt and a similar file as myproject/demo-test.txt.
I then ran the following:
java -mx1800m -cp stanford-classifier.jar edu.stanford.nlp.classify.ColumnDataClassifier
-trainFile myproject/demo-train.txt -testFile myproject/demo-test.txt
The good news: this actually ran without throwing any errors.
The bad news: since it doesn't extract any features, it can't actually estimate a real model and the probability defaults to 1/n for each item, where n is the number of classes.
So then I ran the same command but with two basic options specified:
java -mx1800m -cp stanford-classifier.jar edu.stanford.nlp.classify.ColumnDataClassifier
-trainFile myproject/demo-train.txt -testFile myproject/demo-test.txt -2.useSplitWords =2.splitWordsRegexp "\s+"
That yielded:
Exception in thread "main" java.lang.RuntimeException: Training dataset could not be processed
at edu.stanford.nlp.classify.ColumnDataClassifier.readDataset(ColumnDataClassifier.java:402)
at edu.stanford.nlp.classify.ColumnDataClassifier.readTrainingExamples (ColumnDataClassifier.java:317)
at edu.stanford.nlp.classify.ColumnDataClassifier.trainClassifier(ColumnDataClassifier.java:1652)
at edu.stanford.nlp.classify.ColumnDataClassifier.main(ColumnDataClassifier.java:1628)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
at edu.stanford.nlp.classify.ColumnDataClassifier.makeDatum(ColumnDataClassifier.java:670)
at edu.stanford.nlp.classify.ColumnDataClassifier.makeDatumFromLine(ColumnDataClassifier.java:267)
at edu.stanford.nlp.classify.ColumnDataClassifier.makeDatum(ColumnDataClassifier.java:396)
... 3 more
These are exactly the same results I get when I used the real data I saved from Excel.
What's more, I don't know how to make sense of the ArrayIndexOutOfBoundsException. When I used readline in Python to print out the raw strings for both the demo files I created and the tutorial files that worked, nothing about the formatting seemed different. So I don't know why this exception would be raised with one set of files but not the other.
Finally, one other quirk. At one point I thought maybe line breaks were the problem. So I deleted all line breaks from the demo files while preserving tab breaks and ran the same command:
java -mx1800m -cp stanford-classifier.jar edu.stanford.nlp.classify.ColumnDataClassifier
-trainFile myproject/demo-train.txt -testFile myproject/demo-test.txt -2.useSplitWords =2.splitWordsRegexp "\s+"
Surprisingly, this time no java exceptions are thrown. But again, it's worthless: it treats the entire file as one observation, and can't properly fit a model as a result.
I've spent 8 hours on this now and have exhausted everything I can think of. I'm new to Java but I don't think that should be an issue here -- according to Stanford's API documentation for ColumnDataClassifier, all that's required is a tab delimited file.
Any help would be MUCH appreciated.
One last note: I've run these same commands with the same files on both Windows and Ubuntu, and the results are the same in each.
Use a properties file. The Stanford classifier example uses:
trainFile=20news-bydate-devtrain-stanford-classifier.txt
testFile=20news-bydate-devtest-stanford-classifier.txt
2.useSplitWords=true
2.splitWordsTokenizerRegexp=[\\p{L}][\\p{L}0-9]*|(?:\\$ ?)?[0-9]+(?:\\.[0-9]{2})?%?|\\s+|[\\x80-\\uFFFD]|.
2.splitWordsIgnoreRegexp=\\s+
The number 2 at the start of lines 3, 4 and 5 signifies the column in your tsv file. So in your case you would use
trainFile=20news-bydate-devtrain-stanford-classifier.txt
testFile=20news-bydate-devtest-stanford-classifier.txt
1.useSplitWords=true
1.splitWordsTokenizerRegexp=[\\p{L}][\\p{L}0-9]*|(?:\\$ ?)?[0-9]+(?:\\.[0-9]{2})?%?|\\s+|[\\x80-\\uFFFD]|.
1.splitWordsIgnoreRegexp=\\s+
or if you want to run with command line arguments
java -mx1800m -cp stanford-classifier.jar edu.stanford.nlp.classify.ColumnDataClassifier -trainFile myproject/demo-train.txt -testFile myproject/demo-test.txt -1.useSplitWords =1.splitWordsRegexp "\s+"
I've faced the same error as you.
Pay attention to the tabs in the text you are classifying.
Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
This means that at some point the classifier expects an array of 3 elements after it splits a string on tabs.
I ran a method that counts the number of tabs in each line; any line that doesn't have exactly two of them is where the error comes from.
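A quick way to run that check yourself (a Python sketch; the function name is mine, and for a two-column class<TAB>text file like the demo data you would pass expected_tabs=1):

```python
def find_bad_lines(lines, expected_tabs=1):
    """Return (line_number, tab_count) for every line whose tab count
    differs from expected_tabs. ColumnDataClassifier needs the same
    column count on every line, so any hit here is a candidate for
    the ArrayIndexOutOfBoundsException."""
    bad = []
    for lineno, line in enumerate(lines, start=1):
        tabs = line.rstrip("\n").count("\t")
        if tabs != expected_tabs:
            bad.append((lineno, tabs))
    return bad
```

Run it over open("myproject/demo-train.txt") and any reported line is the one to fix.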
Using the lists webservice I retrieve the items from a list. In the XML returned I can see the attribute ows__IsCurrentVersion="1" which I assume is the same as the file object model (i.e. a boolean to say if it is current or not).
However, I do not see a way to identify which revision it is. What attribute should that be?
By 'revision' do you mean version? If so, you are probably looking for one of these attributes:
ows_owshiddenversion is an Integer (ex: 8)
ows__UIVersion is an Integer (ex: 4096)
ows__UIVersionString is a String (ex: 8.0)
*edit*
Here is some more information after testing it using a Document Library. You should also check the other comments by Hugo and Janis, as they have some good information.
ows_owshiddenversion  ows__UIVersion  ows__UIVersionString
1                     512             1.0
2                     513             1.1
3                     514             1.2
4                     1024            2.0
5                     1025            2.1
Most likely, what you are looking for is ows_owshiddenversion.
The columns in the list you are looking for are VersionID (usually 512, 1024, etc.) and VersionLabel (usually 1.0, 2.0, 3.0) and the attributes that Kit Menke pointed out will give you that information if you are using the Web Service.
You might want to have a look at the Versions web service if you need to do more work with the web services : http://server/_vti_bin/versions.asmx
I'll just add some info. You can use UIVersion (which is the version id) or UIVersionString (which is the user-friendly version label).
For example:
label 0.1 -> id 1
label 1.0 -> id 512
label 1.1 -> id 513
label 2.0 -> id 1024
label 2.2 -> id 1026
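The id/label mapping above follows a simple encoding: UIVersion appears to pack the major and minor numbers as major * 512 + minor. This is a sketch inferred from the examples in this thread, not from the SharePoint documentation, so treat edge cases with care:

```python
def ui_version(major, minor):
    """Pack a major.minor version label into the numeric UIVersion id."""
    return major * 512 + minor

def version_label(ui):
    """Unpack a numeric UIVersion id back into its major.minor label."""
    return f"{ui // 512}.{ui % 512}"
```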
IsCurrentVersion will be true for the latest MAJOR (published) version (2.0 or 3.0, but not 3.1). A minor version number indicates a draft.
I wrote some insights about versioning in my own question & answer.
I recently started playing around with SubSonic 2.2 (only 2.2 because I didn't find any Oracle T4 templates at the time). That aside, I have noticed that I can run a query on table a where field b has a value of 1. If I go into Sql Tools or Oracle Developer and change field b to a value of 2, SubSonic's LoadByKey function still returns an object with field b having a value of 1.
In case that is hard to read.
var id = "primary key";
x.LoadByKey(id);
Console.Write(x.b); // yields 1
I can go change this value in another program and rerun the code and it is always 1 regardless.
Any ideas?
The only thing I can think is that it's an app issue, or a driver-level issue. We don't implement any kind of caching.