Apache Beam Metrics Counter giving incorrect count using SparkRunner - apache-spark

I have source and target CSV files, each with 10 million records and 250 columns.
I am running an Apache Beam pipeline that joins all columns from the source and target files.
When I run this on a Spark cluster, the pipeline executes with no exceptions, but
the Beam metrics counter for the join returns double the expected count when the following Spark property is used:
--executor-memory "2g"
However, when I increase the executor memory to 11g, it returns the correct count.
I have tried the following example:
Pipeline pipeline = Pipeline.create(options);
final TupleTag<String> eventInfoTag = new TupleTag<>();
final TupleTag<String> countryInfoTag = new TupleTag<>();
PCollection<KV<String, String>> eventInfo =
eventsTable.apply(ParDo.of(new ExtractEventDataFn()));
PCollection<KV<String, String>> countryInfo =
countryCodes.apply(ParDo.of(new ExtractCountryInfoFn()));
PCollection<KV<String, CoGbkResult>> kvpCollection =
KeyedPCollectionTuple.of(eventInfoTag, eventInfo)
.and(countryInfoTag, countryInfo)
.apply(CoGroupByKey.create());
PCollection<KV<String, String>> finalResultCollection =
kvpCollection.apply(
"Process",
ParDo.of(
new DoFn<KV<String, CoGbkResult>, KV<String, String>>() {
@ProcessElement
public void processElement(ProcessContext c) {
KV<String, CoGbkResult> e = c.element();
String countryCode = e.getKey();
String countryName = "none";
countryName = e.getValue().getOnly(countryInfoTag);
for (String eventInfo : c.element().getValue().getAll(eventInfoTag)) {
Metrics.counter("count", "errorcount").inc();
c.output(
KV.of(
countryCode,
"Country name: " + countryName + ", Event info: " + eventInfo));
}
}
}));
final PipelineResult result = pipeline.run();
MetricQueryResults metrics =
result
.metrics()
.queryMetrics(
MetricsFilter.builder()
.addNameFilter(MetricNameFilter.inNamespace("count"))
.build());
Iterable<MetricResult<Long>> counters = metrics.getCounters();
for (MetricResult<Long> counter : counters) {
System.out.println("Hi >> "+counter.getName().getName() + " : " + counter.getAttempted() + " " + counter.getCommittedOrNull());
}
I need help with this.
Thank you

public static void main(String[] args) {
Configuration hadoopConf = new Configuration();
hadoopConf.set("fs.defaultFS", args[13]);
hadoopConf.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
hadoopConf.set("fs.file.impl",org.apache.hadoop.fs.LocalFileSystem.class.getName());
final TupleTag<Row> sourceDataInfoTag = new TupleTag<Row>(){};
final TupleTag<Row> targetDataInfoTag = new TupleTag<Row>(){};
HadoopFileSystemOptions options = PipelineOptionsFactory.as(HadoopFileSystemOptions.class);
options.setRunner(SparkRunner.class);
options.setHdfsConfiguration(Collections.singletonList(hadoopConf));
Pipeline pipeline = Pipeline.create(options);
PCollection<String> sourceData = pipeline.apply(TextIO.read().from(args[14]).withDelimiter("\n".getBytes()));
PCollection<KV<Row, Row>> sourceDataRows = sourceData.apply(ParDo.of(new ExtractFunction()));
PCollection<String> targetData = pipeline.apply(TextIO.read().from(args[23]).withDelimiter("\n".getBytes()));
PCollection<KV<Row, Row>> targetDataRows = targetData.apply(ParDo.of(new ExtractFunction()));
PCollection<KV<Row, CoGbkResult>> kvpCollection = KeyedPCollectionTuple
.of(sourceDataInfoTag, sourceDataRows.setCoder(KvCoder.of(RowCoder.of(SOURCE_JOIN_RECORD_TYPE),RowCoder.of(SOURCE_RECORD_TYPE))))
.and(targetDataInfoTag, targetDataRows.setCoder(KvCoder.of(RowCoder.of(TARGET_JOIN_RECORD_TYPE),RowCoder.of(TARGET_RECORD_TYPE))))
.apply(CoGroupByKey.<Row>create());
PCollection<GenericRecord> finalResultCollections = kvpCollection.apply("process",ParDo.of(new DoFn<KV<Row, CoGbkResult>, GenericRecord>() {
@ProcessElement
public void processElement(ProcessContext context) {
KV<Row, CoGbkResult> element = context.element();
Iterator<Row> srcIter = element.getValue().getAll(sourceDataInfoTag).iterator();
Iterator<Row> trgIter = element.getValue().getAll(targetDataInfoTag).iterator();
Metrics.counter("count", "count").inc();
GenericRecordBuilder builder = new GenericRecordBuilder(SCHEMA);
boolean done = false;
boolean captureError = false;
while (!done)
{
// Some iterator data here.
.
.
builder.set(colName, data);
if(captureError){
GenericRecord record = builder.build();
context.output(record);
}
}
}
})).setCoder(AvroCoder.of(GenericRecord.class, SCHEMA));
finalResultCollections.apply("writeText",FileIO.<GenericRecord>write()
.via(ParquetIO.sink(SCHEMA))
.withSuffix(".parquet")
.withPrefix("part")
.to("hdfs://temp/"));
final PipelineResult result = pipeline.run();
State state = result.waitUntilFinish();
MetricQueryResults metrics =
result
.metrics()
.queryMetrics(
MetricsFilter.builder()
.addNameFilter(MetricNameFilter.inNamespace("count"))
.build());
Iterable<MetricResult<Long>> counters = metrics.getCounters();
for (MetricResult<Long> counter : counters) {
System.out.println("Count >> "+counter.getName().getName() + " : " + counter.getAttempted() + " " + counter.getCommittedOrNull());
}
}

In your code, when you call Metrics.counter("count", "errorcount") you define the counter. But it is defined inside a loop, which itself sits inside another sort of loop (processElement). You should define your counter as a field of the DoFn instead; don't worry, the DoFn is reused for processing the bundle. For example: private final Counter counter = Metrics.counter(MyClass.class, COUNTER_NAME);
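A minimal sketch of what that looks like (the class and counter names here are only illustrative):
static class JoinFn extends DoFn<KV<String, CoGbkResult>, KV<String, String>> {
    // Created once per DoFn instance; Beam aggregates the increments across bundles and workers.
    private final Counter errorCount = Metrics.counter(JoinFn.class, "errorcount");

    @ProcessElement
    public void processElement(ProcessContext c) {
        // ... your existing join logic ...
        errorCount.inc();
    }
}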
Also, you showed only part of the code, but I don't see the done boolean ever being set to true. That is just out of curiosity, though.
And last but not least, you should try the Spark runner on the master branch of Beam, because a fix about metrics was merged yesterday (metrics were not reset when running multiple pipelines inside the same JVM). I don't know if it matches your use case, but it's worth trying.

Related

Creating an Adjustment before Processing a (pick, pack, ship) scan

I am trying to create an Adjustment to my stock as I am scanning the item.
The code works and the adjustment is released as expected. The problem is that the data is not submitted to SQL before the base method is invoked. Is there a way to force the data to be pushed to SQL before the base method is called?
I have tried adding a PXTransactionScope, but after ts.Complete() the data in SQL has still not been updated.
[PXOverride]
public virtual void ProcessLotSerialBarcode(string barcode, ProcessLotSerialBarcodeDelegate baseMethod)
{
string LotnumberWithOutPrf =// code to extract Lotnumber from QR code
//check if lot has stock
INItemLotSerial iNItemLotSerial = PXSelect<
INItemLotSerial,
Where<INItemLotSerial.lotSerialNbr, Equal<Required<INItemLotSerial.lotSerialNbr>>,
And<INItemLotSerial.inventoryID, Equal<Required<INItemLotSerial.inventoryID>>>>>
.Select(Base, Lotnumber, Base.HeaderView.Current.InventoryID);
//if there is no stock do adjustment
if (iNItemLotSerial.QtyHardAvail == 0)
{
using (PXTransactionScope ts = new PXTransactionScope())
{
INAdjustmentEntry iNAdjustmentEntryEntry = PXGraph.CreateInstance<INAdjustmentEntry>();
INRegister iNRegister = new INRegister();
iNRegister.ExtRefNbr = "N/A";
iNRegister.TranDesc = "Adjustment for Shipment: " + Base.HeaderView.Current.RefNbr;
iNRegister.Hold = false;
iNAdjustmentEntryEntry.adjustment.Insert(iNRegister);
INTran iNTran = iNAdjustmentEntryEntry.transactions.Insert();
iNTran.InventoryID = Base.HeaderView.Current.InventoryID;
iNTran.SiteID = 9;
iNTran.LocationID = 77;
iNTran.Qty = 1;
iNTran.UOM = Base.HeaderView.Current.UOM;
iNAdjustmentEntryEntry.transactions.Update(iNTran);
iNTran.LotSerialNbr = LotnumberWithOutPrf;
iNAdjustmentEntryEntry.transactions.Update(iNTran);
iNAdjustmentEntryEntry.Save.PressButton();
iNAdjustmentEntryEntry.release.Press();
iNAdjustmentEntryEntry.Save.PressButton();
iNAdjustmentEntryEntry.Cancel.PressButton();
ts.Complete(Base);
}
}
baseMethod?.Invoke(LotnumberWithOutPrf);
}

How can I save a custom field on ARCashSale after the Release process is complete?

When I release a Cash Sales document, after everything verifies successfully and the GL Batch is created, I execute code which creates a second GL Batch that handles other transactions related to the cash sale. After that batch is created, I want to save the second GL Batch's Ref Nbr on the Cash Sales document as well. When I try to save it, I get an error:
Here is my code overriding the normal Release process:
public delegate IEnumerable ReleaseDelegate(PXAdapter adapter);
[PXOverride]
public IEnumerable Release(PXAdapter adapter, ReleaseDelegate baseMethod)
{
ARCashSale cashSale = Base.Document.Current;
PXGraph.InstanceCreated.AddHandler<JournalEntry>(delegate (JournalEntry oldJournalEntry)
{
oldJournalEntry.RowPersisted.AddHandler<Batch>(delegate (PXCache sender, PXRowPersistedEventArgs e)
{
Batch oldBatch = oldJournalEntry.BatchModule.Current;
if (oldBatch != null && isCreated == false && e.Operation == PXDBOperation.Insert && e.TranStatus == PXTranStatus.Completed)
{
isCreated = true;
if (CFBSAdjustments.Select().Count > 0)
{
JournalEntry newJournalEntry = PXGraph.CreateInstance<JournalEntry>();
Batch newBatch = new Batch();
newBatch = newJournalEntry.BatchModule.Insert(newBatch);
Customer customer = PXSelect<Customer, Where<Customer.bAccountID, Equal<Required<Customer.bAccountID>>>>.Select(Base, cashSale.CustomerID);
newBatch.Description = "Fund Entry for Cash Sales Reference " + cashSale.RefNbr;
newBatch.FinPeriodID = oldBatch.FinPeriodID;
newBatch.LedgerID = oldBatch.LedgerID;
newBatch.DateEntered = oldBatch.DateEntered;
decimal? debit = 0;
decimal? credit = 0;
foreach (CFBSCashSalesAdjustment row in CFBSAdjustments.Select())
{
GLTran tran = newJournalEntry.GLTranModuleBatNbr.Insert();
tran.SummPost = true;
tran.BranchID = cashSale.BranchID;
tran.TranType = GLTran.tranClass.Normal;
tran.TranClass = GL.Messages.RoundingDiff;
tran.LedgerID = newBatch.LedgerID;
tran.FinPeriodID = newBatch.FinPeriodID;
tran.TranDate = newBatch.DateEntered;
tran.CuryInfoID = Base.currencyinfo.Current.CuryInfoID;
tran.AccountID = row.Account;
tran.SubID = row.Subaccount;
tran.DebitAmt = row.DebitAmt;
tran.CuryDebitAmt = row.DebitAmt;
debit += row.DebitAmt;
tran.CreditAmt = row.CreditAmt;
tran.CuryCreditAmt = row.CreditAmt;
credit += row.CreditAmt;
tran.RefNbr = row.CashSalesRefNbr;
tran.TranDesc = customer.AcctCD + " - " + customer.AcctName;
newJournalEntry.GLTranModuleBatNbr.Update(tran);
}
newBatch = newJournalEntry.BatchModule.Update(newBatch);
if (GLSetup.Current.GetExtension<GLSetupExt>().UsrAutoRelease == true)
{
newJournalEntry.BatchModule.Current.Hold = false;
newJournalEntry.release.Press();
}
newJournalEntry.Save.Press();
if (isCreated)
isCreated = false;
cashSale.GetExtension<ARRegisterExt>().UsrFundBatch = newJournalEntry.BatchModule.Current.BatchNbr;
//Base.Document.Current.GetExtension<ARRegisterExt>().UsrFundBatch = newJournalEntry.BatchModule.Current.BatchNbr;
//Base.dummy_CATran.View.RequestRefresh();
Base.Document.Update(cashSale);
//TODO - Figure out why the fund batch isn't saving to the Cash Sale
Base.Actions.PressSave();
//Base.dummy_CATran.Cache.ClearQueryCache();
//Base.Persist(typeof(ARCashSale), PXDBOperation.Update);
}
}
});
});
return baseMethod(adapter);
}
I left in all of the different methods I've tried to get the ref nbr to save. I've even tried adding a field updated handler for the BatchNbr field and forcing the new number in that way, but it did not work.
EDIT: I also noticed that I cannot successfully set an extended field on the Batch DAC. For example, I have the line newBatch.GetExtension<BatchExt>().ExtRefNbr = cashSale.RefNbr; to set the ref nbr in that document as well, but any time I run 'NewJournal.Press.Save()' it changes the set value of the extended field to null. If anyone knows how to set the extension field, I may be able to work with that instead to go down a different path that may do what I need.
I believe the appropriate place to perform your update is in the ARDocumentRelease graph. Try something like this:
public class ARDocumentReleaseExtension : PXGraphExtension<ARDocumentRelease>
{
public override void Initialize()
{
ARSetup setup = Base.arsetup.Current;
Base.ARDocumentList.SetProcessDelegate(
delegate (List<BalancedARDocument> list)
{
List<ARRegister> newlist = new List<ARRegister>(list.Count);
foreach (BalancedARDocument doc in list)
{
newlist.Add(doc);
}
ARDocumentRelease.ReleaseDoc(newlist, true);
CreateSecondBatch(newlist);
}
);
Base.ARDocumentList.SetProcessCaption(PX.Objects.AR.Messages.Release);
Base.ARDocumentList.SetProcessAllCaption(PX.Objects.AR.Messages.ReleaseAll);
}
private void CreateSecondBatch(List<ARRegister> docs)
{
foreach(BalancedARDocument register in docs)
{
if (register.DocType == ARDocType.CashSale)
{
//create the second batch and assign the 2nd ref nbr to the cash sale user field
}
}
}
}

Upload CSV to BigQuery in C#

Basically what I want to do is submit a job to BigQuery (asynchronously), check the job status afterwards, and print out the corresponding status or error info. I created a skeleton as below, but I need help with:
A GoogleApiException: Not Found Job exception when BigQueryService.Jobs.Get(jobReference.ProjectId, jobReference.JobId).Execute() is called. My gut feeling is that the job wasn't submitted correctly, but I don't know how to do it correctly.
How should I handle GoogleApiExceptions?
First step: create a Job (uploading the CSV file into BigQuery) and return the JobReference:
TableReference DestTable = new TableReference();
DestTable.ProjectId = project;
DestTable.DatasetId = dataset;
DestTable.TableId = tableId;
Job Job = new Job();
JobConfiguration Config = new JobConfiguration();
JobConfigurationLoad ConfigLoad = new JobConfigurationLoad();
ConfigLoad.Schema = schema;
ConfigLoad.DestinationTable = DestTable;
ConfigLoad.Encoding = "ISO-8859-1";
ConfigLoad.CreateDisposition = "CREATE_IF_NEEDED";
ConfigLoad.WriteDisposition = createDisposition;
ConfigLoad.FieldDelimiter = delimiter.ToString();
ConfigLoad.AllowJaggedRows = true;
Config.Load = ConfigLoad;
Job.Configuration = Config;
//set job reference (mainly job id)
JobReference JobRef = new JobReference();
JobRef.JobId = GenerateJobID("Upload");
JobRef.ProjectId = project;
Job.JobReference = JobRef;
using(FileStream fileStream = new FileStream(filePath,FileMode.Open)){
var JobInfo = BigQueryService.Jobs.Insert(Job,project,fileStream,"text/csv");//application/octet-stream
JobInfo.UploadAsync();
Console.WriteLine(JobInfo.GetProgress().Status.ToString());
}
return JobRef;
Then, poll the job status using the projectId and jobId from the JobReference returned in the first step:
while (true)
{
pollJob = BigQueryService.Jobs.Get(jobReference.ProjectId, jobReference.JobId).Execute();
i = 0;
Console.WriteLine("Job status" + jobReference.JobId + ": " + pollJob.Status.State);
if (pollJob.Status.State.Equals("DONE"))
{
return pollJob;
}
// Pause execution for pauseSeconds before polling job status again,
// to reduce unnecessary calls to the BigQuery API and lower overall
// application bandwidth.
Thread.Sleep(pauseSeconds * 1000);
}
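For the second question, the best I can come up with is wrapping the Execute() call above in something like this (assuming the GoogleApiException type from the Google.Apis client library), but I am not sure whether that is the right approach:
try
{
    pollJob = BigQueryService.Jobs.Get(jobReference.ProjectId, jobReference.JobId).Execute();
}
catch (Google.GoogleApiException ex)
{
    // A "Not Found" here presumably means the job was never created server-side,
    // e.g. the upload was started but never completed before polling began.
    Console.WriteLine("Job lookup failed: " + ex.Message);
    throw;
}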
There's hardly any useful sample code out there showing how to upload a local CSV file to a BigQuery table. I eventually got something to work. It might not be the best solution, but it at least works. It's open to any improvement.
private JobReference JobUpload(string project, string dataset, string tableId, string filePath, TableSchema schema, string createDisposition, char delimiter)
{
TableReference DestTable = new TableReference();
DestTable.ProjectId = project;
DestTable.DatasetId = dataset;
DestTable.TableId = tableId;
Job Job = new Job();
JobConfiguration Config = new JobConfiguration();
JobConfigurationLoad ConfigLoad = new JobConfigurationLoad();
ConfigLoad.Schema = schema;
ConfigLoad.DestinationTable = DestTable;
ConfigLoad.Encoding = "ISO-8859-1";
ConfigLoad.CreateDisposition = "CREATE_IF_NEEDED";
ConfigLoad.WriteDisposition = createDisposition;
ConfigLoad.FieldDelimiter = delimiter.ToString();
ConfigLoad.AllowJaggedRows = true;
ConfigLoad.SourceFormat = "CSV";
Config.Load = ConfigLoad;
Job.Configuration = Config;
//set job reference (mainly job id)
JobReference JobRef = new JobReference();
JobRef.JobId = GenerateJobID("Upload");
JobRef.ProjectId = project;
Job.JobReference = JobRef;
using(FileStream fileStream = new FileStream(filePath,FileMode.Open)){
JobsResource.InsertMediaUpload InsertMediaUpload = new JobsResource.InsertMediaUpload(BigQueryService,Job,Job.JobReference.ProjectId,fileStream,"application/octet-stream");
var JobInfo = InsertMediaUpload.UploadAsync();
Console.WriteLine(JobInfo.Status);
while (!JobInfo.IsCompleted)
{
//wait for the job to be activated and run
Console.WriteLine(JobInfo.Status);
}
}
return JobRef;
}
After this, you can actually use the returned JobRef to poll the job status, almost the same as we do with the Java API:
while(true)
{
PollJob = BigQueryService.Jobs.Get(jobReference.ProjectId, jobReference.JobId).Execute();
Console.WriteLine("Job status" + jobReference.JobId + ": " + PollJob.Status.State);
if (PollJob.Status.State.Equals("DONE"))
{
return PollJob;
}
}
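As in the polling loop from the question, you probably also want to pause between status checks so the loop does not hammer the API, e.g.:
Thread.Sleep(pauseSeconds * 1000);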

Converting Hand Written DI to Windsor Provided DI

For the past six or seven months I have been doing DI in some of my components, and as a result they have grown to become a little bit complicated. In the past I have been creating object graphs with hand-written factories. Since this is becoming unmanageable, I am trying to move that code to framework-based DI (configured in code, not in XML files). I am posting my code as well as the issues I am stuck with.
Here is my composition layer (it is big, so bear with me :) ):
IAgentFactory GetAgentFactory()
{
string errorMessage;
IDictionary<AgentType, ServiceParameters> agentFactoryPrerequisite = new Dictionary<AgentType, ServiceParameters>();
string restResponseHeaderStatus = MyConfigurationProject.GetConfigValue("RestResponseHeaderStatus", out errorMessage);
var service1Parameters = new ServiceParameters();
service1Parameters.BindingName = MyConfigurationProject.GetConfigValue("Service1WebHttpBindingConfiguration", out errorMessage).ToString();
service1Parameters.HeaderPassword = MyConfigurationProject.GetConfigValue("Service1HeaderPassword", out errorMessage).ToString();
service1Parameters.HeaderUserName = MyConfigurationProject.GetConfigValue("Service1HeaderUserName", out errorMessage).ToString();
service1Parameters.ResponseHeaderStatus = restResponseHeaderStatus;
service1Parameters.ServicePassword = MyConfigurationProject.GetConfigValue("Service1ServicePassword", out errorMessage).ToString();
service1Parameters.ServiceUrl = MyConfigurationProject.GetConfigValue("Service1URL", out errorMessage).ToString();
service1Parameters.ServiceUserName = MyConfigurationProject.GetConfigValue("Service1ServiceUserName", out errorMessage).ToString();
agentFactoryPrerequisite.Add(new KeyValuePair<AgentType, ServiceParameters>(AgentType.Service1, service1Parameters));
var agentFactory = new AgentFactory(agentFactoryPrerequisite);
return agentFactory;
}
protected DatalayerSettings GetDataLayerSettings()
{
var datalayerSettings = new DatalayerSettings();
datalayerSettings.ConnectionString = ConfigurationManager.ConnectionStrings["MyConnectionString"].ConnectionString;
datalayerSettings.MySchemaName = ConfigurationManager.AppSettings["MyDatabaseSchema"];
datalayerSettings.UpdatingUser = "Admin";
return datalayerSettings;
}
PostgersDAFactory GetPostGresDaFactory()
{
var datalayerSettings = GetDataLayerSettings();
return new PostgersDAFactory(datalayerSettings, "MyAssembly.PostgresDA", "MyDifferentAssembly.CommonDatalayer", "MyServiceLogPath");
}
public class PostgersDAFactory
{
readonly DatalayerSettings _datalayerSettings;
readonly string _assemblyName;
readonly string _logPath;
readonly string _mySecondAssemblyName;
public PostgersDAFactory(DatalayerSettings datalayerSettings, string assemblyName, string mySecondAssemblyName, string logPath)
{
_datalayerSettings = datalayerSettings;
_assemblyName = assemblyName;
_logPath = logPath;
_mySecondAssemblyName = mySecondAssemblyName;
}
public IDA1 GetDA1Instance()
{
var type1 = Type.GetType("MyAssembly.PostgresDA.ClassRealisingImplementation_For_DA1," + _assemblyName);
return (IDA1)Activator.CreateInstance(type1, _datalayerSettings, _logPath);
}
public IDA2 GetDA2Instance()
{
var type1 = Type.GetType("MyAssembly.PostgresDA.ClassRealisingImplementation_For_DA2," + _assemblyName);
return (IDA2)Activator.CreateInstance(type1, _datalayerSettings);
}
public IDA3 GetDA3Instance()
{
var type1 = Type.GetType("MyAssembly2.ClassRealisingImplementation_For_DA3," + _commonDaAssemblyName);
return (IDA3)Activator.CreateInstance(type1, _datalayerSettings);
}
}
public BaseFileHandler GetFileHandler(FileProvider fileprovider, MockedServiceCalculator mockedServicecalculator = null)
{
string errorMessage;
var postgresFactory = GetPostGresDaFactory();
var Da1Instance = postgresFactory.GetDA1Instance();
var fileSyncBusiness = new FileSyncBusiness(Da1Instance);
var interfaceConfiguratonParameters = fileSyncBusiness.GetInterfaceConfigurationParameters();
var servicePointDetailsSettings = new ServicePointDetailsSettings();
var nullDate = new DateTime(2099, 1, 1);
CommonValidations commonValidations;
if (mockedServicecalculator == null)
{
commonValidations = GetStubbedCommonValidations(nullDate);
}
else
{
commonValidations = GetCommonValidations_WithMockedServiceCalculator(nullDate, mockedServicecalculator);
}
switch (fileprovider)
{
case FileProvider.Type1:
var type1Adapter = new Type1Adaptor(false, nullDate);
servicePointDetailsSettings = GetUtiltaParameters(interfaceConfiguratonParameters);
return new Type1FileHandler(servicePointDetailsSettings, fileSyncBusiness, commonValidations, type1Adapter);
case FileProvider.Type2:
var type2Adapter = new Type2Adaptor(true, nullDate);
servicePointDetailsSettings.ApplicableParameters = MyApplicationCommonMethods.ConvertConfigurationTableToDictonary(interfaceConfiguratonParameters, "applicableintype2");
servicePointDetailsSettings.BadFileLocation = MyConfigurationProject.GetConfigValue("Type2BadFileLocation", out errorMessage);
servicePointDetailsSettings.DateFormat = MyConfigurationProject.GetConfigValue("Type2DateFormat", out errorMessage);
servicePointDetailsSettings.FailureFileLocation = MyConfigurationProject.GetConfigValue("Type2FailureFile", out errorMessage);
servicePointDetailsSettings.LogFileName = "Type2LogFile";
servicePointDetailsSettings.LogPath = MyConfigurationProject.GetConfigValue("Type2ErrorLog", out errorMessage);
servicePointDetailsSettings.MandatoryParameters = MyApplicationCommonMethods.GetDictonaryForMandatoryParameters(interfaceConfiguratonParameters, "applicableintype2", "mandatoryintype2");
servicePointDetailsSettings.SourceFileLocation = MyConfigurationProject.GetConfigValue("type2FileLocation", out errorMessage);
servicePointDetailsSettings.SuccessFileLocation = MyConfigurationProject.GetConfigValue("type2SuccessFile", out errorMessage);
servicePointDetailsSettings.TargetFileExtension = MyConfigurationProject.GetConfigValue("type2SupportedFileType", out errorMessage);
servicePointDetailsSettings.Type2RecordTag = MyConfigurationProject.GetConfigValue("MyApplicationtype2RecordTag", out errorMessage);
return new Type2FileHandler(servicePointDetailsSettings, fileSyncBusiness, commonValidations, type2Adapter);
default:
throw new NotImplementedException("FileProvider type: " + Convert.ToInt32(fileprovider) + " is not implemented");
}
}
}
While moving towards Windsor I am facing several issues; as I have never used this product before, it seems very complicated.
Issues:
How do I pass parameters to objects when they have parameterised constructors? (See the sketch after this list.)
I know there is a better way to write this "PostgersDAFactory" class, but I simply don't know it.
Some factory methods, such as GetAgentFactory(), depend on a static method from another project, which in turn gives me configuration values (I had to store them in the database); another method, GetDataLayerSettings, depends on app config as well as some static strings.
I am likely to change parameter names in my classes in order to promote readability, so how do I turn on logging for Windsor?
Finally, another complicated method, GetFileHandler, has some logic (a switch case).
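From what I can gather from the documentation, the first point would look roughly like the sketch below, but I have not verified this (the constructor argument values are the ones used in GetPostGresDaFactory above; everything else is an assumption about how the composition root might look):
using Castle.MicroKernel.Registration;
using Castle.Windsor;

var container = new WindsorContainer();

// DatalayerSettings is built once from configuration and registered as an instance;
// the plain string constructor arguments are supplied by parameter name.
container.Register(
    Component.For<DatalayerSettings>().Instance(GetDataLayerSettings()),
    Component.For<PostgersDAFactory>()
        .DependsOn(
            Dependency.OnValue("assemblyName", "MyAssembly.PostgresDA"),
            Dependency.OnValue("mySecondAssemblyName", "MyDifferentAssembly.CommonDatalayer"),
            Dependency.OnValue("logPath", "MyServiceLogPath")));

var postgresFactory = container.Resolve<PostgersDAFactory>();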
I have tried going to their website, but I found the information very difficult to digest; their API is huge and the learning curve seems mammoth.
Note: I had to change the variable names due to security reasons.

Issue with SqlScalar<T> and SqlList<T> when calling stored procedure with parameters

The new API for ServiceStack.OrmLite dictates that when calling, for example, a stored procedure, you should use either SqlScalar<T> or SqlList<T>, like this:
List<Poco> results = db.SqlList<Poco>("EXEC GetAnalyticsForWeek 1");
List<Poco> results = db.SqlList<Poco>("EXEC GetAnalyticsForWeek @weekNo", new { weekNo = 1 });
List<int> results = db.SqlList<int>("EXEC GetTotalsForWeek 1");
List<int> results = db.SqlList<int>("EXEC GetTotalsForWeek @weekNo", new { weekNo = 1 });
However, the named parameters don't work. You HAVE to respect the order of the parameters in the SP. I think this is because the SP is executed using CommandType = CommandType.Text instead of CommandType.StoredProcedure, and the parameters are added with dbCmd.Parameters.Add(). It seems that because the CommandType is Text, it expects the parameters to be embedded in the SQL query string rather than added via Parameters.Add(), so it ignores the naming.
An example:
CREATE PROCEDURE [dbo].[sproc_WS_SelectScanFeedScanRecords]
@JobNo int = 0
,@SyncStatus int = -1
AS
BEGIN
SET NOCOUNT ON;
SELECT
FSR.ScanId
, FSR.JobNo
, FSR.BatchNo
, FSR.BagNo
, FSR.ScanType
, FSR.ScanDate
, FSR.ScanTime
, FSR.ScanStatus
, FSR.SyncStatus
, FSR.JobId
FROM dbo.SCAN_FeedScanRecords FSR
WHERE ((FSR.JobNo = @JobNo) OR (@JobNo = 0) OR (ISNULL(@JobNo,1) = 1))
AND ((FSR.SyncStatus = @SyncStatus) OR (@SyncStatus = -1) OR (ISNULL(@SyncStatus,-1) = -1))
END
When calling this SP as this:
db.SqlList<ScanRecord>("EXEC sproc_WS_SelectScanFeedScanRecords #SyncStatus",new {SyncStatus = 1});
It returns all records with JobNo = 1 instead of SyncStatus = 1, because it ignores the named parameter and binds the values in the order in which the parameters are defined in the SP.
I have to call it like this:
db.SqlList<ScanRecord>("EXEC sproc_WS_SelectScanFeedScanRecords @SyncStatus=1");
Is this expected behavior? I think it defeats the purpose of the anonymous-type parameters if I can't trust them.
TIA
Bo
My solution was to roll my own methods for stored procedures. If people find them handy, I could add them to the project.
public static void StoredProcedure(this IDbConnection dbConn, string storedprocedure, object anonType = null)
{
dbConn.Exec(dbCmd =>
{
dbCmd.CommandType = CommandType.StoredProcedure;
dbCmd.CommandText = storedprocedure;
dbCmd.SetParameters(anonType, true);
dbCmd.ExecuteNonQuery();
});
}
public static T StoredProcedureScalar<T>(this IDbConnection dbConn, string storedprocedure, object anonType = null)
{
return dbConn.Exec(dbCmd =>
{
dbCmd.CommandType = CommandType.StoredProcedure;
dbCmd.CommandText = storedprocedure;
dbCmd.SetParameters(anonType, true);
using (IDataReader reader = dbCmd.ExecuteReader())
return GetScalar<T>(reader);
});
}
public static List<T> StoredProcedureList<T>(this IDbConnection dbConn, string storedprocedure, object anonType = null)
{
return dbConn.Exec(dbCmd =>
{
dbCmd.CommandType = CommandType.StoredProcedure;
dbCmd.CommandText = storedprocedure;
dbCmd.SetParameters(anonType, true);
using (var dbReader = dbCmd.ExecuteReader())
return IsScalar<T>()
? dbReader.GetFirstColumn<T>()
: dbReader.ConvertToList<T>();
});
}
They are just modified versions of SqlScalar and SqlList, plus ExecuteNonQuery.
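With these in place, the earlier call can go against the stored procedure directly and keep the named parameters, for example (assuming the extension methods above are in scope):
List<ScanRecord> records = db.StoredProcedureList<ScanRecord>(
    "sproc_WS_SelectScanFeedScanRecords",
    new { JobNo = 0, SyncStatus = 1 });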
