I am having a problem saving data because of an incorrectly generated parameter name.
The table has a field "E-mail", and when the class wrapper is generated, the InsertCmd uses "@E-mail" as one of the parameters. In SQL Server this is an illegal parameter name, and it generates an exception.
I have hunted all over SubSonic for a way to modify the parameter name to simply "@Email", but the ParameterName property is read-only.
I am using SubSonic 2.2 and don't have the source for it to make an internal modification.
Any ideas?
TIA
I got a mate of mine who uses SVN to pull the source code and, as expected, found a bug in the SubSonic source.
When the column name is set in the class wrapper, the setter for the ColumnName property sets the ParameterName property for you correctly, using
"parameterName = Utility.PrefixParameter(Utility.StripNonAlphaNumeric(columnName), Table.Provider);". This strips any illegal characters, like the hyphen in my E-mail column.
BUT... The property ParameterName is NOT used when the SQL commands are created. Here is the code in SQLDataProvider.GetInsertSQL, line 1462.
pars.Append(Utility.PrefixParameter( colName, this));
I changed this to
pars.Append(col.ParameterName);
and the problem is now sorted.
Thanks to those of you who came up with possible solutions.
You can modify the templates if you can't change the column name. See this blog post for details of how to do that:
http://johnnycoder.com/blog/2008/06/09/custom-templates-with-subsonic/
I have written an ETL job in Talend Open Studio that loads a CSV/TSV file into a database. To do so, I want to provide the delimiter to the tFileInputDelimited component using a dynamic context loaded from a text file. I have specified it in the context file as fieldDelimiter="\t" and in the tFileInputDelimited component as shown in the screenshot, but it doesn't work as a delimiter. I have also tried fieldDelimiter="\\t" and fieldDelimiter="\u0009" (the Unicode character for tab).
What should I provide in the context file so that the delimiter is a tab character and not "\t" string as is happening in this case?
I notice a difference in the context variable names. In the screenshot you have mentioned (String)context.get("fileDelimiter"), but in the text you are saying "I have specified it in the context file as fieldDelimiter="\t"".
Just keeping the context as follows in the .properties file should work:
fieldDelimiter=\t
Also use context.fieldDelimiter instead of (String)context.get("fileDelimiter").
In your context file, just put fileDelimiter = \t
(Without quotes)
And then access the variable in the field delimiter setting. Talend will automatically handle it as a string.
Hope this works.
There is no function (String)context.get("key") that I know of. If you have set the separator as a String element in the context, just access it directly. Right now, I suppose an empty String is being set as the field separator.
So if your field is called fileDelimiter simply put context.fileDelimiter into the Field Separator.
As pointed out by others, you should use the context.ParamName syntax; the benefit of this method is syntax checking at compile time, which eliminates the risk of typos in your variable names.
This parameter must be declared in your job (contexts tab) in order for Talend to recognize it. You can either create it as a built-in or import it if it's in the repository.
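Putting the answers together, a minimal setup would look roughly like this (the variable name comes from the question; how you load the file, e.g. tContextLoad or an implicit context load, is up to you):
# context .properties file loaded at runtime
fieldDelimiter=\t
Then put context.fieldDelimiter (no quotes, no cast) in the Field Separator box of tFileInputDelimited, and make sure fieldDelimiter is declared as a String in the job's Contexts tab.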
I am using a simple looping plugin so that my template looks like this:
{exp:loop_plus start="1" end="4" increment="1"}
<h3>{slide_{index}_title}</h3>
{/exp:loop_plus}
However, I am ending up with the following output:
<h3>{slide_1_title}</h3>
<h3>{slide_2_title}</h3>
<h3>{slide_3_title}</h3>
<h3>{slide_4_title}</h3>
Is there any way I can have dynamic variable names like this? I am not looking for alternative methods for building a slider; I simply would like to know if dynamic variable names like this are possible. Thanks!
I'm assuming that Loop Plus (http://devot-ee.com/add-ons/loop-plus) sets the {index} part, so the question is what is defining {slide_1_title}...?
Assuming you have an entry field or variable with this defined, what you have is correct, but if it's not working, it means there's a parsing order issue.
Let's assume the code you supplied is wrapped in an {exp:channel:entries} tag pair. What happens is that EE will try to parse the variable first, so it will see {slide_{index}_title}, which doesn't exist. The {exp:loop_plus} add-on will then parse it, converting it to {slide_1_title} (but too late, as channel:entries has already tried to parse it), which is what is finally output to the template.
So what you want to ensure is that EE parses {exp:loop_plus} before {exp:channel:entries}; do this using the parse="inward" parameter:
{exp:loop_plus start="1" end="4" increment="1" parse="inward"}
<h3>{slide_{index}_title}</h3>
{/exp:loop_plus}
This is a global EE parameter that EE uses to control parse order - you won't find it documented under the specific add-on. By adding the parameter, this child tag will get parsed before its parent.
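For example, if the loop sits inside a channel entries loop (the channel name "slides" is just a placeholder here), the full template would look something like this:
{exp:channel:entries channel="slides" limit="1"}
{exp:loop_plus start="1" end="4" increment="1" parse="inward"}
<h3>{slide_{index}_title}</h3>
{/exp:loop_plus}
{/exp:channel:entries}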
One way you could do it is to declare a preload_replace variable in your template and use it in your custom field name.
So something like:
{preload_replace:my_var_prefix="whatever"}
And then in your loop, you could then use:
{slide_{my_var_prefix}_title}
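For instance, with a prefix value of "1", the template would contain:
{preload_replace:my_var_prefix="1"}
<h3>{slide_{my_var_prefix}_title}</h3>
Because preload_replace values are swapped in before any other parsing, EE sees {slide_1_title} from the start. Note, though, that this gives you one fixed prefix per template rather than a per-iteration index.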
I am trying to do an XSL transform on an XML structure in a BPEL assign statement. There is a syntax problem, but I am having trouble finding official documentation. There are examples all over the internet, but I have not found a clear explanation. Here is my best shot. What do the last two parameters do? Why is Eclipse saying the first argument must be a literal, even though "test3.xsl" is a string?
<bpel:assign validate="yes" name="Assign">
<bpel:copy keepSrcElementName="no">
<bpel:from>
<![CDATA[bpel:doXslTransform("test3.xsl", $personalInfoServiceOutput.parameters), "middle", $positionSkillManagementInput]]>
</bpel:from>
<bpel:to variable="positionSkillManagementInput"></bpel:to>
</bpel:copy>
</bpel:assign>
The signature of doXSLTransform looks as follows:
object bpel:doXslTransform(string, node-set, (string, object)*)
The first parameter is the name of the XSLT script, and the second is an XPath expression identifying the source document (e.g. a variable, part, nodeset, or node). The third and fourth parameters form a key-value pair: the string is the key and the object is the value. These pairs are mapped into the script's parameter context so that you can access the values by name in the stylesheet. There can be any number of these pairs.
The best resource to look up such things is the WS-BPEL 2.0 specification, doXSLTransform is described in Sect. 8.4
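Applied to the snippet in the question (and assuming "middle" is meant to be the name of a stylesheet parameter), all of the arguments need to go inside the function call:
<bpel:from>
<![CDATA[bpel:doXslTransform("test3.xsl", $personalInfoServiceOutput.parameters, "middle", $positionSkillManagementInput)]]>
</bpel:from>
Inside test3.xsl, the value passed as "middle" can then be picked up with a top-level <xsl:param name="middle"/>.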
When I use the following code:
<bpel:copy keepSrcElementName="no">
<bpel:from>
<![CDATA[bpel:doXslTransform("parseSample.xsl", $output.payload)]]>
</bpel:from>
<bpel:to variable="output"></bpel:to>
</bpel:copy>
I also get the error that the first argument must be a literal string.
But when I deploy my service (with the error) to WSO2 BPS, it works fine.
You can try with this.
I faced the same issue. I agree with NGoyal: it shows an error in BPEL but works when deployed.
I have an SSIS package that obtains a list of new GUIDs from a SQL table. I then shred the GUIDs into a string variable so that I have them separated by commas. An example of how they appear in the variable is:
'5f661168-aed2-4659-86ba-fd864ca341bc','f5ba6d28-7283-4bed-9f11-e8f6bef225c5'
The problem is in the data flow task. I use the variable as a parameter in a SQL query to get my source data and I cannot get my results. When the WHERE clause looks like:
WHERE [GUID] IN (?)
I get an invalid character error, so I found out the implicit conversion doesn't work with the GUIDs like I thought it would. I could resolve this by putting {} around the GUID if it were a single GUID, but there are potentially 4 or 5 different GUIDs this will need to retrieve at runtime.
Figuring I could get around it with this:
WHERE CAST([GUID] AS VARCHAR(50)) IN (?)
But this simply produces no results, and there should be two in my current test.
I figure there must be a way to accomplish this... What am I missing?
You can't, at least not using the mechanics you have provided.
You cannot concatenate values and make that work with a parameter.
I'm open to being proven wrong on this point but I'll be damned if I can make it work.
How can I make it work?
The trick is to just go old school and build your query via string concatenation.
In my package, I defined two variables, filter and query. filter will be the concatenation you are already performing.
query will be an expression (right-click the variable, Properties: set EvaluateAsExpression to True; the Expression would be something like "SELECT * FROM dbo.RefData R WHERE R.refkey IN (" + @[User::filter] + ")").
In your data flow, then change your source to use "SQL command from variable". No mapping is required there.
The basic look and feel would be like this (screenshot: OLE Source query).
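For illustration, with the sample filter value from the question, the query variable would evaluate to something like:
SELECT * FROM dbo.RefData R WHERE R.refkey IN ('5f661168-aed2-4659-86ba-fd864ca341bc','f5ba6d28-7283-4bed-9f11-e8f6bef225c5')
(dbo.RefData and refkey are just the placeholder names used above; substitute your own table and column.)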
I'm just starting to use SubSonic 3 and I'm struggling with a basic insert operation.
I would like to insert a new row into a table with all columns and values as specified by an entity object. However the only examples I can find look like you have to specify each column and value, which to me isn't a big step up from doing the raw SQL insert!
myDB.Insert.Into<MyTable>(m => m.Col1, m => m.Col2, etc).Values(col1Val, col2Val, ...)
I'm not using the ActiveRecord template, which I know from 2.x can do this, and there doesn't seem to be a Repository.tt template with the version I downloaded (SubSonic.Core 3.0.0.3).
So is this possible?
Is there a Repository.tt template available for v3.0.0.3?
Thanks,
Canice.
You will not need a Repository.tt. All you need to do is instantiate an instance of SimpleRepository and call its insert method, which takes the entity type as a generic argument and the instance to be persisted.
You can take a look at the simple repository tests on github:
https://github.com/subsonic/SubSonic-3.0/blob/master/SubSonic.Tests/Repositories/SimpleRepositoryTests.cs
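A minimal sketch of that approach (the connection string name is an assumption, and Add is the method used in those tests; adjust if your build differs):
using SubSonic.Repository;

// "MyConnectionString" is the name of a connection string in app.config (assumed here)
var repo = new SimpleRepository("MyConnectionString", SimpleRepositoryOptions.RunMigrations);

// MyTable is your plain entity class; all mapped properties are persisted in one call
var row = new MyTable { Col1 = col1Val, Col2 = col2Val };
repo.Add(row);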