I've been trying to run a stepwise PROC LOGISTIC regression model with an ordinal outcome. Because the proportional odds assumption does not hold for all of my predictors, several of my variables need unequal slopes. So, in my code, I am specifying both EQUALSLOPES and UNEQUALSLOPES, yet I continue to get this syntax error.
ERROR 22-322: Syntax error, expecting one of the following: ;, ABSFCONV, AGGREGATE, ALPHA, BACKWARD, BEST, BINWIDTH, BUILDRULE, CL,
CLODDS, CLPARM, CODING, CONVERGE, CORRB, COVB, CT, CTABLE, DETAILS, DSCALE, EXPB, FAST, FCONV, FIRTH, GCONV,
HIERARCHY, INCLUDE, INFLUENCE, IPLOTS, ITPRINT, L, LACKFIT, LINK, MAXITER, MAXSTEP, NOCHECK, NODUMMYPRINT, NOFIT,
NOINT, NOLOGSCALE, OFFSET, OUTROC, PARMLABEL, PCORR, PEVENT, PL, PLCL, PLCONV, PLRL, PPROB, PSCALE, RIDGING,
RISKLIMITS, ROCEPS, RSQUARE, SCALE, SELECTION, SEQUENTIAL, SINGULAR, SLE, SLENTRY, SLS, SLSTAY, START, STB, STEPWISE,
STOP, STOPRES, TECHNIQUE, UNEQUALSLOPES, WALDCL, WALDRL, XCONV.
ERROR 76-322: Syntax error, statement will be ignored.
I am sure I am entering my code correctly; even when I do not specify any variables for either the EQUALSLOPES or the UNEQUALSLOPES option, I continue to receive the same message. Here is my code below.
proc logistic data=upper_limit;
class Profitrank dealer_state_id dlr_zip RUCA2 area_description / param=ref;
model profitrank = num_booked num_approved num_apps bk_conversion_rt appr_rt num_new num_used Percentage_of_New Average_Vehicle_Age average_dti
average_pti average_revolving_balance average_revolving_balance_rate average_public_records average_inquiries average_inquiry_6_months
average_open_rev_trades average_term average_major_derog average_minor_derog average_open_install average_scorecard average_fico_score average_app_risk_score
average_LTV average_consumer_rate average_truist_rate Dealer_state_ID RUCA2 / link=clogit selection=stepwise SLE=0.10 SLS=0.10 EQUALSLOPES=(appr_rt average_pti average_public_records
average_open_rev_trades average_major_derog average_minor_derog average_open_install average_fico_score average_app_risk_score RUCA2) unequalslopes details maxiter=500;
run;
Any help with this would be greatly appreciated!
You need to use options that work with the version of SAS you are using. The EQUALSLOPES option was added in SAS/STAT 14.1, which shipped with SAS 9.4M3, released in 2015. Notice that the error message lists UNEQUALSLOPES among the valid options but not EQUALSLOPES, which is the telltale sign that your installation predates it.
Note that SAS is licensed on a subscription basis, which means you have already paid for the right to use the newest version. Get someone at your company to install the newest version of SAS. You might also need a newer version of Enterprise Guide to take advantage of the new features.
Related
I am trying to perform a basic merge operation to add nonexistent nodes and relationships to my graph by going through a CSV file row by row. I'm using py2neo v4, and because there is basically no documentation or example of how to use py2neo, I can't figure out how to actually get it done. This isn't my real code (which is very complicated, to handle many different cases), but its structure is basically like this:
import py2neo as pn
graph = pn.Graph("bolt://localhost:###/", user="neo4j", password="py2neoSux")
matcher = pn.NodeMatcher(graph)
tx = graph.begin()
if matcher.match("Prefecture", name="foo").first() is None:
    previousNode = pn.Node("Type1", name="foo", yc=1)
else:
    previousNode = matcher.match("Prefecture", name="foo").first()
thisNode = pn.Node("Type2", name="bar", yc=1)
tx.merge(previousNode)
tx.merge(thisNode)
theLink = pn.Relationship(thisNode, "PARTOF", previousNode)
tx.merge(theLink)
tx.commit()
Currently this throws the error
ValueError: Primary label and primary key are required for MERGE operation
the first time it needs to merge a node that it hasn't found (i.e., when creating a node). So then I change the line to:
tx.merge(thisNode, primary_label=list(thisNode.labels)[0], primary_key="name")
This gives me the error IndexError: list index out of range from somewhere deep in the py2neo source code (....site-packages\py2neo\internal\operations.py", line 168, in merge_subgraph, at node = nodes[i]). I tried to figure out what was going wrong there, but I couldn't decipher where the nodes list comes from through the various connections to other commands.
So, it currently matches and creates a few nodes without problem, but at some point it will match until it needs to create, and then it fails when trying to create that node (even though it is using the same code and doing the same thing under the same circumstances in a loop). It made it through all 20 rows in my sample once, but it usually stops on rows 3-5.
I thought it had something to do with the transactions (see comments), but I get the same problem when I merge directly on the graph. Maybe it has to do with the py2neo merge function finding more node identities than nodes. Maybe there is something wrong with how I specified my primary label and/or key.
Because this error and code are opaque, I have no idea how to move forward.
Anybody have any advice or instructions on merging nodes with py2neo?
Of course I'd like to know how to fix my current problem, but more generally I'd like to learn how to use this package. Examples, instructions, real documentation?
I am having a similar problem and just got done ripping my hair out trying to figure out what was wrong! So, what I learned was that, at least in my case (and maybe yours too, since we got similar error messages and were doing similar things), the problem lay in trying to create a Node whose __primarykey__ pointed to a different field name than the other nodes used.
PSEUDO EXAMPLE:
# in some for loop or complex code
node = Node("Example", name="Test", something="else")
node.__primarykey__ = "name"
# <code merging or otherwise creating the node>
# later on in the loop you might have done something like this because the field was null
node = Node("Example", something="new")
node.__primarykey__ = "something"  # primary key now points at a different field
I hope this helps and was clear; I'm still recovering from wrapping my head around things. If it's not clear, let me know and I'll revise.
Good luck.
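Building on that, here is a minimal hedged sketch of a merge that keeps the primary label and key consistent across the loop (py2neo v4 assumed; the connection URL, password, labels, and names are illustrative, not from the original post):
import py2neo as pn

# Illustrative connection details; replace with your own.
graph = pn.Graph("bolt://localhost:7687", user="neo4j", password="password")
tx = graph.begin()

# Set the same primary key field on every node so MERGE can always
# decide whether a node already exists.
prefecture = pn.Node("Prefecture", name="foo")
prefecture.__primarykey__ = "name"
city = pn.Node("City", name="bar")
city.__primarykey__ = "name"

# Equivalently, pass the label and key explicitly to merge():
tx.merge(prefecture, primary_label="Prefecture", primary_key="name")
tx.merge(city, primary_label="City", primary_key="name")

# Merging a relationship also merges its endpoint nodes.
tx.merge(pn.Relationship(city, "PARTOF", prefecture))
tx.commit()
The failure mode described above comes from nodes whose __primarykey__ points at a field the node doesn't actually carry (for example, because the source row had a null there), so guard against missing values before constructing each Node.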
I made a tool to rename view numbers (“Detail Number”) on a sheet based on their location on the sheet. Where this is breaking is the transactions: I'm trying to run two transactions sequentially in Revit Python Shell. I also did this originally in Dynamo, and that failed in a similar way, so I know it's something to do with transactions.
Transaction #1: Add a suffix (“-x”) to each detail number to ensure the new numbers won’t conflict (1 will be 1-x, 4 will be 4-x, etc)
Transaction #2: Change detail numbers with calculated new number based on viewport location (1-x will be 3, 4-x will be 2, etc)
Better visual explanation here: https://www.docdroid.net/EP1K9Di/161115-viewport-diagram-.pdf.html
Py File here: http://pastebin.com/7PyWA0gV
Attached is the Python file, but essentially what I'm trying to do is:
# <---- Make unique numbers
t = Transaction(doc, 'Rename Detail Numbers')
t.Start()
for i, viewport in enumerate(viewports):
    setParam(viewport, "Detail Number", getParam(viewport, "Detail Number") + "x")
t.Commit()

# <---- Do the thang
t2 = Transaction(doc, 'Rename Detail Numbers')
t2.Start()
for i, viewport in enumerate(viewports):
    setParam(viewport, "Detail Number", detailViewNumberData[i])
t2.Commit()
As I explained in my answer to your comment in the Revit API discussion forum, the behaviour you describe may well be caused by a need to regenerate between the transactions. The first modification does something, and the model needs to be regenerated before the modifications take full effect and are reflected in the parameter values that you query in the second transaction. You are accessing stale data. The Building Coder provides all the nitty gritty details and numerous examples on the need to regenerate.
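For illustration, here is a hedged sketch of what forcing a regeneration might look like in the first pass (doc, viewports, setParam, and getParam are the names from the original script; Document.Regenerate must be called while a transaction is open):
t = Transaction(doc, 'Rename Detail Numbers')
t.Start()
for i, viewport in enumerate(viewports):
    setParam(viewport, "Detail Number", getParam(viewport, "Detail Number") + "x")
doc.Regenerate()  # flush pending changes so subsequent reads are not stale
t.Commit()
Committing a transaction also triggers a regeneration, so the explicit call matters most when you read parameter values back inside the same transaction.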
Summary of this entire thread including both problems addressed:
http://thebuildingcoder.typepad.com/blog/2016/12/need-for-regen-and-parameter-display-name-confusion.html
So this issue actually had nothing to do with transactions or document regeneration. I discovered (with some help :) ) that the problem lay in how I was setting/getting the parameter. "Detail Number", like a lot of parameters, has duplicate versions that share the same descriptive parameter name in a viewport element.
Apparently the reason for this might be legacy issues, though I'm not sure. Thus, when I was trying to get/set the detail number, it was occasionally grabbing the incorrect read-only parameter, the one whose built-in enumeration is "VIEWER_DETAIL_NUMBER". The correct one is "VIEWPORT_DETAIL_NUMBER". This was happening because I was trying to get the param just by passing the descriptive param name "Detail Number". Revising how I get/set parameters via the built-in enum resolved this issue.
Please see pdf for visual explanation: https://www.docdroid.net/WbAHBGj/161206-detail-number.pdf.html
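In code, the enum-based lookup might look like this hedged sketch (assuming the standard Revit API namespace available in Revit Python Shell; the value being set is illustrative):
from Autodesk.Revit.DB import BuiltInParameter

# Looking the parameter up by its display name ("Detail Number") can return
# the read-only VIEWER_DETAIL_NUMBER; fetch the writable one by enum instead.
param = viewport.get_Parameter(BuiltInParameter.VIEWPORT_DETAIL_NUMBER)
if param is not None and not param.IsReadOnly:
    param.Set("3")  # illustrative new detail number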
I currently have a VM running Titan over a local Cassandra backend and would like the ability to use ElasticSearch to index strings using CONTAINS matches and regular expressions. Here's what I have so far:
After titan.sh is run, a Groovy script is used to load in the data from separate vertex and edge files. The first stage of this script loads the graph from Titan and sets up the ES properties:
config.setProperty("storage.backend","cassandra")
config.setProperty("storage.hostname","127.0.0.1")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","db/es")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
The second part of the script sets up the indexed types:
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make();
The third part loads the data from the CSV files; this has been tested and works fine.
My problem is, I don't seem to be able to use the ElasticSearch functions when I do a Gremlin query. For example:
g.E.has("property",CONTAINS,"test")
returns 0 results, even though I know this field contains the string "test" for that property at least once. Weirder still, when I change CONTAINS to something that isn't recognised by ElasticSearch, I get a "no such property" error. I can also perform exact string matches and any numerical comparisons, including greater than or less than; however, I suspect the default indexing method is being used instead of ElasticSearch in those cases.
Given the lack of errors when I try to run a more advanced ES query, I am at a loss as to what is causing the problem here. Is there anything I may have missed?
Thanks,
Adam
I'm not quite sure what's going wrong in your code. From your description everything looks fine. Can you try the following script (just paste it into your Gremlin REPL):
config = new BaseConfiguration()
config.setProperty("storage.backend","inmemory")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","/tmp/es-so")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
g = TitanFactory.open(config)
g.makeKey("name").dataType(String.class).make()
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make()
g.makeLabel("knows").make()
g.commit()
alice = g.addVertex(["name":"alice"])
bob = g.addVertex(["name":"bob"])
alice.addEdge("knows", bob, ["property":"foo test bar"])
g.commit()
// test queries
g.E.has("property",CONTAINS,"test")
g.query().has("property",CONTAINS,"test").edges()
The last 2 lines should return something like e[1t-4-1w][4-knows-8]. If that works and you still can't figure out what's wrong in your code, it would be good if you can share your full code (e.g. in Github or in a Gist).
Cheers,
Daniel
To all those who have had experience with using the crf++ toolkit (refer: http://crfpp.sourceforge.net/)
Please find the error message which pops up on trying to execute the CRF++ training program:
CRF++: Yet Another CRF Tool Kit
Copyright (C) 2005-2009 Taku Kudo, All rights reserved.
encoder.cpp(280) [feature_index.open(templfile, trainfile)] feature_index.cpp(86) [max_size == size] inconsistent column size: 21 20 train.data
I'm not sure how to interpret the error message.
There are 20 features in my training file and the 21st token is the class value.
I have created the Crf++ template file as per the instructions on the site.
It looks like a training data format issue; make sure the number of columns is consistent across all sentences.
I got this error today and found that the crf++ toolkit uses the tab character (\t) as the default column separator, whereas my training data file used a single space, which led to the error.
Some points to check:
1. Check that you have a new line after each sentence.
2. Check that your column values do not contain any spaces.
The error suggests that the number of columns is not the same across all rows. Your maximum number of columns is 21, and that count should be consistent throughout the training file, but crf_learn found 20 somewhere in your train.data training file. So locate such a row and remove or repair it.
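To locate the offending row quickly, a short script helps; here is a hedged Python sketch (it assumes tab-separated columns, blank lines between sentences per the CRF++ data format, and a file named train.data):
# Report every non-blank line in train.data whose column count differs
# from the first data row's count.
expected = None
with open("train.data", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        line = line.rstrip("\n")
        if not line.strip():
            continue  # blank line: sentence separator
        ncols = len(line.split("\t"))
        if expected is None:
            expected = ncols
        elif ncols != expected:
            print("line %d: %d columns (expected %d)" % (lineno, ncols, expected))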
I am using Cucumber with RubyMine, and I have a scenario with steps that verify some special controls in a form (I am using Cucumber for automation testing). The controls don't have anything to do with each other, and there is no reason for the later steps to be skipped if one before them fails.
Does anyone know what configurations or commands should I use to run all the steps in a scenario even if they all fail?
I think the only way to achieve the desired behavior (which is quite uncommon) is to define custom steps and catch exceptions in them yourself. According to the Cucumber wiki, a step fails if it raises an error, and almost all default steps raise an error if they can't find or interact with an element on the page. If you catch these exceptions, the step will be marked as passed, and in the rescue you can provide custom output. Also, I recommend carefully defining which exceptions you want to catch: if you're OK with Selenium not finding an element on the page, rescue only ElementNotFound exceptions; don't catch all exceptions.
I've seen a lot of threads on the Web about people wanting to continue steps execution if one failed.
I've discussed with Cucumber developers: they think this is a bad idea: https://groups.google.com/forum/#!topic/cukes/xTqSyR1qvSc
Many times, scenarios can be reworked to avoid this need: scenarios can be split into several smaller, independent scenarios, or several checks can be aggregated into one, producing a more human-readable, less script-like scenario.
But if you REALLY need this feature, like our project does, we've made a fork of Cucumber-JVM.
This fork lets you annotate steps so that when they fail with a given exception, they will let the next steps execute anyway (and the step itself is marked as failed).
The fork is available here:
https://github.com/slaout/cucumber-jvm/tree/continue-next-steps-for-exceptions-1.2.4
It's published on the OSSRH Maven repository.
See the README.md for usage, explanation screenshot and Maven dependency.
It's only available for the Java language, though; any help is welcome to adapt the code to Ruby, for instance. I don't think it would be a lot of work.
The question is old, but hopefully this will be helpful. What I'm doing feels kind of "wrong", but it works. In your web steps, if you want to keep going, you have to catch exceptions. I'm doing that primarily to add helpful failure messages. I'm checking a table full of values that are identified in Cucumber with a table having a bunch of rows like:
Then my result should be:
| Row Identifier | Column Identifier | Subcolumn Identifier | $1,247.50 |
where the identifiers make sense in the application domain and name a specific cell in the results table in a human-friendly way. I have helpers that convert the human identifiers to DOM IDs, which are used first to check whether the row I'm looking for exists at all, and then to look for the specific value in a cell in that row. The default failure message for a missing row is clear enough for me (expected to find css "tr#my_specific_dom_id" but there were no matches). But the failure message for checking specific text in a cell is completely unhelpful. So I made a step that catches the exception and uses the Cucumber step info and some element searching to get a good failure message:
Then /^my application domain results should be:$/ do |table|
  table.rows.each do |row|
    row_id = dom_id_for(row[0])
    cell_id = dom_id_for(row[0], row[1], row[2])
    page.should have_css "tr##{row_id}"
    begin
      page.should have_xpath("//td[@id='#{cell_id}'][text()=\"#{row[3].strip}\"]")
    rescue Capybara::ExpectationNotMet => exception
      # find returns a Capybara::Element; native returns a Selenium::WebDriver::Element
      contents = find(:xpath, "//td[@id='#{cell_id}']").native.text
      puts "Expected #{row[3]} for #{row[0,2].join(' ')} but found #{contents} instead."
      @step_failures_were_rescued = true
    end
  end
end
Then I define a hook in features/support/hooks.rb like:
After do |scenario|
  unless scenario.failed?
    raise Capybara::ExpectationNotMet if @step_failures_were_rescued
  end
end
This makes the overall scenario fail, but it masks the step failures from Cucumber, so all the step results are green, including the ones that aren't right. You have to see the scenario failure, then look back at the messages to see what failed. This seems kind of "bad" to me, but it works. It's WAY more convenient in my case to get the expected and found values listed in a domain-friendly context for the whole table I'm checking, rather than to get a message like "I looked for "$123.45" but I couldn't find it." There might be a better way to do this using Capybara's "within" method, but this is the best I've come up with so far.