How can I generate several text files at the same time locally?
I am using the method:
throw new PXRedirectToFileException (file, true);
However, this method only generates 1 text file. I need more than 1 text file to be generated at a time.
List<object> data1099Misc = new List<object> { };
ARInvoice ari = Base.Document.Current;

foreach (xvrFSCab diot in PXSelect<xvrFSCab,
    Where<xvrFSCab.invoiceNbr,
        In<Required<xvrFSCab.invoiceNbr>>>>.Select(Base, ari.InvoiceNbr))
{
    data1099Misc.Add(CreatePayerARecord(diot));
}

// 'sw' and 'stream' are declared earlier (not shown here).
FixedLengthFile flatFile = new FixedLengthFile();
flatFile.WriteToFile(data1099Misc, sw);
sw.Flush();
sw.FlushAsync();

int cont = 0;
while (cont < 3)
{
    cont = cont + 1;
    string path = "DIOTJOSE" + ".txt";
    PX.SM.FileInfo file = new PX.SM.FileInfo(path, null, stream.ToArray());

    // The redirect exception ends the request, so the loop never gets past its
    // first iteration and only one file is ever returned.
    throw new PXRedirectToFileException(file, true);
}
Acumatica had the same issue when they had to open multiple reports with one click (with a RedirectException). For this reason, Acumatica supports multiple required exceptions only for reports.
They have a method called "CombineReport" that works with multiple PXReportRequiredException instances (PXReportsRedirectList).
The sad part is that they did not build anything similar for the other RequiredException or RedirectException types.
I tried to write my own "Combine" method, but I could not, because RedirectHelper.TryRedirect hardcodes the concrete RedirectException types inside its body instead of working against a generic or base type :(
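For reports, a minimal sketch of the CombineReport pattern looks like the following. This is only an illustration: it assumes the CombineReport overload that takes the accumulated exception, a report ID, and a parameter dictionary, and the report ID, parameter names, and invoices collection are placeholders, not taken from the code above.

// Sketch: accumulate several report redirects into one exception so that a single
// click opens all of the reports. Report ID and parameters are examples only.
PXReportRequiredException reportEx = null;
foreach (ARInvoice invoice in invoices)
{
    var parameters = new Dictionary<string, string>
    {
        ["DocType"] = invoice.DocType,
        ["RefNbr"] = invoice.RefNbr
    };

    // CombineReport creates the exception on the first call and appends to it afterwards.
    reportEx = PXReportRequiredException.CombineReport(reportEx, "AR641000", parameters);
}
if (reportEx != null)
{
    throw reportEx;
}

There is no equivalent for PXRedirectToFileException, which is exactly the limitation described above.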
Related
We are batch creating Views and Dependent Views (currently only ViewPlans) via the Revit API in Revit 2019, 2020, and 2022. We are seeing the following inconsistent results in all three Revit versions.
Below is a simplified code snippet. On many, but not all, groups of three duplicate views, some shared parameters that are set in the View Template are present in the parent view and in child duplicate views 'B' and 'C', but not in child duplicate view 'A'.
using (var transactionGroup = new TransactionGroup(document, "Create views and set parameter values"))
{
    transactionGroup.Start();

    var sectors = new[] { "A", "B", "C" };
    var viewLookup = new Dictionary<string, ElementId>();

    using (var makeViewsTransaction = new Transaction(document, "Create views"))
    {
        makeViewsTransaction.Start();

        ViewPlan mainPlan = ViewPlan.Create(document, viewFamilyTypeId, levelId);
        mainPlan.Name = "Plan_Name_Sector";
        viewLookup.Add(mainPlan.Name, mainPlan.Id);

        if (mainPlan.CanViewBeDuplicated(ViewDuplicateOption.AsDependent))
        {
            foreach (string sector in sectors)
            {
                string viewName = mainPlan.Name + "_" + sector;
                var childPlanId = mainPlan.Duplicate(ViewDuplicateOption.AsDependent);
                var childPlan = document.GetElement(childPlanId) as ViewPlan;
                childPlan.Name = viewName;
                viewLookup.Add(childPlan.Name, childPlan.Id);
            }
        }

        makeViewsTransaction.Commit();
    }

    using (var editViewsTransaction = new Transaction(document, "Set view parameters"))
    {
        editViewsTransaction.Start();

        foreach (var entry in viewLookup)
        {
            var view = document.GetElement(entry.Value) as Autodesk.Revit.DB.View;

            // paramSet and ActionBroker come from the surrounding (omitted) code.
            if (paramSet.ScopeBoxId.IntegerValue != ActionBroker.EmptyElementId.IntegerValue)
            {
                view.get_Parameter(BuiltInParameter.VIEWER_VOLUME_OF_INTEREST_CROP).Set(paramSet.ScopeBoxId);
            }
        }

        editViewsTransaction.Commit();
    }

    transactionGroup.Assimilate();
}
Screenshot of a result sample showing the missing parameter values.
Has anyone else experienced this?
It seems to me like a pretty straightforward use of the Revit API, but perhaps the transaction group is introducing problems? I'm not sure what we could or should do differently to get more consistent results. Any suggestions?
I am creating a plugin that makes use of the code available from BCFier to select elements from an external server version of the file and highlight them in a Revit view. However, the elements are clearly not being found in Revit: all elements stay visible and none are highlighted. The specific pieces of code I am using are:
private void SelectElements(Viewpoint v)
{
    var elementsToSelect = new List<ElementId>();
    var elementsToHide = new List<ElementId>();
    var elementsToShow = new List<ElementId>();

    var visibleElems = new FilteredElementCollector(OpenPlugin.doc, OpenPlugin.doc.ActiveView.Id)
        .WhereElementIsNotElementType()
        .WhereElementIsViewIndependent()
        .ToElementIds()
        .Where(e => OpenPlugin.doc.GetElement(e).CanBeHidden(OpenPlugin.doc.ActiveView)); //might affect performance, but it's necessary

    bool canSetVisibility = (v.Components.Visibility != null &&
                             v.Components.Visibility.DefaultVisibility &&
                             v.Components.Visibility.Exceptions.Any());
    bool canSetSelection = (v.Components.Selection != null && v.Components.Selection.Any());

    //loop elements
    foreach (var e in visibleElems)
    {
        //string guid = ExportUtils.GetExportId(OpenPlugin.doc, e).ToString();
        var guid = IfcGuid.ToIfcGuid(ExportUtils.GetExportId(OpenPlugin.doc, e));
        Trace.WriteLine(guid.ToString());

        if (canSetVisibility)
        {
            if (v.Components.Visibility.DefaultVisibility)
            {
                if (v.Components.Visibility.Exceptions.Any(x => x.IfcGuid == guid))
                    elementsToHide.Add(e);
            }
            else
            {
                if (v.Components.Visibility.Exceptions.Any(x => x.IfcGuid == guid))
                    elementsToShow.Add(e);
            }
        }

        if (canSetSelection)
        {
            if (v.Components.Selection.Any(x => x.IfcGuid == guid))
                elementsToSelect.Add(e);
        }
    }

    try
    {
        OpenPlugin.HandlerSelect.elementsToSelect = elementsToSelect;
        OpenPlugin.HandlerSelect.elementsToHide = elementsToHide;
        OpenPlugin.HandlerSelect.elementsToShow = elementsToShow;
        OpenPlugin.selectEvent.Raise();
    }
    catch (System.Exception ex)
    {
        TaskDialog.Show("Exception", ex.Message);
    }
}
This is the section that should filter the lists, and it does run, producing IDs that look like these:
3GB5RcUGnAzQe9amE4i4IN
3GB5RcUGnAzQe9amE4i4Ib
3GB5RcUGnAzQe9amE4i4J6
3GB5RcUGnAzQe9amE4i4JH
3GB5RcUGnAzQe9amE4i4Ji
3GB5RcUGnAzQe9amE4i4J$
3GB5RcUGnAzQe9amE4i4GD
3GB5RcUGnAzQe9amE4i4Gy
3GB5RcUGnAzQe9amE4i4HM
3GB5RcUGnAzQe9amE4i4HX
3GB5RcUGnAzQe9amE4i4Hf
068MKId$X7hf9uMEB2S_no
The trouble is that comparing these IDs to the list of IDs in the IFC file we imported from reveals that they do not appear in the IFC file, and looking in Revit I found that none of the GUIDs in Revit were in the produced list either. Almost all of the objects also share the same leading part of the ID, and I'm not experienced enough to know how likely that is.
So my question is, is it something in this code that is an issue?
The IFC GUID is based on the Revit UniqueId, but it is not identical. Please read about the Element Identifiers in RVT, IFC, NW and Forge to learn how they are connected.
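As a rough sketch of that relationship (assuming the IfcGuid base-64 helper that ships with BCFier, as used in the code above):

// Sketch only: how the 22-character IFC GUID relates to Revit identifiers.
string GetIfcGuidFor(Document doc, Element e)
{
    // The export GUID is derived from the element's UniqueId plus document information,
    // so it is not equal to e.UniqueId itself.
    Guid exportId = ExportUtils.GetExportId(doc, e.Id);
    return IfcGuid.ToIfcGuid(exportId);
}

// For comparison: if the model was exported to IFC with the option to store IFC GUIDs
// in element parameters, the stored value can be read back like this.
string GetStoredIfcGuid(Element e)
{
    Parameter p = e.get_Parameter(BuiltInParameter.IFC_GUID);
    return p != null ? p.AsString() : null;
}

Note that if the IFC file was authored in another application and then linked or imported into Revit, the GUIDs computed this way for the resulting Revit elements will generally not match the GUIDs stored in that file, which may be why none of the GUIDs match.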
I have searched and found several examples of how to do this, but I can't make them work - well, part of it doesn't work.
I can perform the file upload, but the subsequent attempt to change properties fails.
I'm uploading a file from a base64 payload - this part works - but when I afterwards attempt to edit the properties (a custom column) associated with the file, the code fails.
Here is the code (simplified for readability):
(note that props is a collection of custom objects (FileProperty) with a name and a value attribute).
using (ClientContext context = new ClientContext("<sharepoint_server_url>"))
{
    context.Credentials = new SharePointOnlineCredentials(<usr>, <secure_pwd>);

    using (System.IO.MemoryStream ms = new System.IO.MemoryStream(Convert.FromBase64String(<base64_content>)))
    {
        File.SaveBinaryDirect(context, <relative_path>, ms, true);
    }

    // file is uploaded - so far so good!

    // attempt to edit properties of the file
    if (props != null)
    {
        if (props.Count > 0)
        {
            File newFile = context.Web.GetFileByServerRelativeUrl(<relative_path>);
            context.Load(newFile);
            context.ExecuteQuery();

            newFile.CheckOut();
            ListItem item = newFile.ListItemAllFields;
            foreach (FileProperty fp in props)
            {
                item[fp.name] = fp.value;
            }
            item.Update();
            newFile.CheckIn(string.Empty, CheckinType.OverwriteCheckIn);
        }
    }
}
This code throws an exception in the part where I try to update the properties.
Message: The file was not found.
Can anyone tell me what is wrong with this example or provide another example on how to do this?
Also, a question - is there a way to address a file by a unique ID which is the same regardless of where in the SharePoint server the file is located or moved to?
I hope someone can help me out - thanks :)
OK, I found a solution to my problem. I don't know why this works better, it just does.
For all I know, I'm doing the exact same thing, just in another way - maybe someone else who knows more about SharePoint than me (which isn't much) can explain why this works while the first example I posted doesn't.
Prior to the code shown, I ensure that <site_url> doesn't end with "/", and that <library_name> and <file_name> don't start or end with "/".
With the code below I can upload a file and update its properties; in my case I changed "Title" and a custom column "CustCulomnA", and it works.
using (ClientContext context = new ClientContext(<site_url>))
{
    context.Credentials = new SharePointOnlineCredentials(<usr>, <secure_pwd>);

    FileCreationInformation fci = new FileCreationInformation()
    {
        Url = <file_name>,
        Content = Convert.FromBase64String(<base64_content>),
        Overwrite = true
    };

    Web web = context.Web;
    List lib = web.Lists.GetByTitle(<library_name>);
    lib.RootFolder.Files.Add(fci);
    context.ExecuteQuery();
    response.message = "uploaded";

    if (props != null)
    {
        if (props.Count > 0)
        {
            File newFile = context.Web.GetFileByUrl(<site_url> + "/" + <library_name> + "/" + <file_name>);
            context.Load(newFile);
            context.ExecuteQuery();

            newFile.CheckOut();
            ListItem item = newFile.ListItemAllFields;
            foreach (FileProperty fp in props)
            {
                item[fp.name] = fp.value;
            }
            item.Update();
            newFile.CheckIn(string.Empty, CheckinType.OverwriteCheckIn);
            context.ExecuteQuery();
        }
    }
}
Make sure the file's server-relative URL is valid in this case.
For example, if the complete URL is:
https://zheguo.sharepoint.com/sites/test/Shared%20Documents/test.jpg
then the server-relative URL should be:
/sites/test/Shared%20Documents/test.jpg
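As a small sketch of that first point, loading the file by its server-relative URL (the same GetFileByServerRelativeUrl call already used in the question, and assuming clientContext is authenticated as in the snippet below) would look like:

// Sketch: load the uploaded file via its server-relative URL before editing properties.
File file = clientContext.Web.GetFileByServerRelativeUrl("/sites/test/Shared%20Documents/test.jpg");
clientContext.Load(file);
clientContext.ExecuteQuery();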
You can also use the GetFileByUrl method, passing the complete file URL, like this:
clientContext.Credentials = new SharePointOnlineCredentials(userName, securePassword);
Web web = clientContext.Web;
clientContext.Load(web);
clientContext.ExecuteQuery();
File file = web.GetFileByUrl("https://zheguo.sharepoint.com/sites/test/Shared%20Documents/test.jpg");
clientContext.Load(file);
clientContext.ExecuteQuery();
file.CheckOut();
ListItem item = file.ListItemAllFields;
item["Title"] = "Test";
item.Update();
file.CheckIn(string.Empty, CheckinType.OverwriteCheckIn);
clientContext.ExecuteQuery();
I'm creating a file in Acumatica by calling an action from the API, so that I can retrieve the file in my application.
Is it possible to delete the file via API after I'm done with it? I'd rather not have it cluttering up my Acumatica database.
Failing this, is there a recommended cleanup approach for these files?
I found examples of how to delete a file from within Acumatica, as well as how to save a new version of an existing file! The implementation below saves a new version but has the deletion method commented out. Because I built this into my report generation process, I'm not deleting the report via the API later, but it would be easy to turn the deletion into an action callable by the API.
private IEnumerable ExportReport(PXAdapter adapter, string reportID, Dictionary<String, String> parameters)
{
    //Press save if the SO is not completed
    if (Base.Document.Current.Completed == false)
    {
        Base.Save.Press();
    }

    PX.SM.FileInfo file = null;
    using (Report report = PXReportTools.LoadReport(reportID, null))
    {
        if (report == null)
        {
            throw new Exception("Unable to access Acumatica report writer for specified report : " + reportID);
        }

        PXReportTools.InitReportParameters(report, parameters, PXSettingProvider.Instance.Default);
        ReportNode reportNode = ReportProcessor.ProcessReport(report);
        IRenderFilter renderFilter = ReportProcessor.GetRenderer(ReportProcessor.FilterPdf);

        //Generate the PDF
        byte[] data = PX.Reports.Mail.Message.GenerateReport(reportNode, ReportProcessor.FilterPdf).First();
        file = new PX.SM.FileInfo(reportNode.ExportFileName + ".pdf", null, data);

        //Save the PDF to the SO
        UploadFileMaintenance graph = new UploadFileMaintenance();

        //Check to see if a file with this name already exists
        Guid[] files = PXNoteAttribute.GetFileNotes(Base.Document.Cache, Base.Document.Current);
        foreach (Guid fileID in files)
        {
            FileInfo existingFile = graph.GetFileWithNoData(fileID);
            if (existingFile.Name == reportNode.ExportFileName + ".pdf")
            {
                //If we later decide we want to delete previous versions instead of saving them, this can be changed to
                //UploadFileMaintenance.DeleteFile(existingFile.UID);
                //But in the meantime, for history purposes, set the UID of the new file to that of the existing file so we can save it as a new version.
                file.UID = existingFile.UID;
            }
        }

        //Save the file with the setting to create a new version if one already exists based on the UID
        graph.SaveFile(file, FileExistsAction.CreateVersion);

        //Save the note attribute so we can find it again.
        PXNoteAttribute.AttachFile(Base.Document.Cache, Base.Document.Current, file);
    }

    //Return the info on the file
    return adapter.Get();
}
The response from Acumatica:
The Screen-Based (S-B) API allows a clean way of downloading a generated report as a file. The Contract-Based (C-B) API simply does not have this feature. I suggest you provide feedback here: feedback.acumatica.com (EDIT: Done! https://feedback.acumatica.com/ideas/ACU-I-1852)
I think a couple of workarounds are:
1) use the S-B API, with a login from the C-B API, to generate the report and get it as a file (see the example below), or
2) create another method to delete the file once the required report file has been downloaded. For that, you will need to pass back the FileID or something similar to identify the file for deletion (see the sketch after the example).
Example of #1:
using (DefaultSoapClient sc = new DefaultSoapClient("DefaultSoap1"))
{
    string sharedCookie;
    using (new OperationContextScope(sc.InnerChannel))
    {
        sc.Login("admin", "123", "Company", null, null);
        var responseMessageProperty = (HttpResponseMessageProperty)
            OperationContext.Current.IncomingMessageProperties[HttpResponseMessageProperty.Name];
        sharedCookie = responseMessageProperty.Headers.Get("Set-Cookie");
    }

    try
    {
        Screen scr = new Screen(); // add reference to report e.g. http://localhost/Demo2018R2/Soap/SO641010.asmx
        scr.CookieContainer = new System.Net.CookieContainer();
        scr.CookieContainer.SetCookies(new Uri(scr.Url), sharedCookie);

        var schema = scr.GetSchema();
        var commands = new Command[]
        {
            new Value { LinkedCommand = schema.Parameters.OrderType, Value = "SO" },
            new Value { LinkedCommand = schema.Parameters.OrderNumber, Value = "SO004425" },
            schema.ReportResults.PdfContent
        };

        var data = scr.Submit(commands);
        if (data != null && data.Length > 0)
        {
            System.IO.File.WriteAllBytes(@"c:\Temp\SalesOrder.pdf",
                Convert.FromBase64String(data[0].ReportResults.PdfContent.Value));
        }
    }
    finally
    {
        sc.Logout();
    }
}
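And a rough sketch of workaround #2, reusing the helpers already shown in the ExportReport code above (the action name and the file-name matching are illustrative, not an existing Acumatica API):

// Sketch only: an action that can be exposed to the API to remove the previously
// generated report file from the current document's attachments once it has been downloaded.
public PXAction<SOOrder> DeleteGeneratedReport;
[PXButton]
[PXUIField(DisplayName = "Delete Generated Report")]
protected virtual IEnumerable deleteGeneratedReport(PXAdapter adapter)
{
    string fileName = "<report_file_name>.pdf"; // placeholder: match however the report file was named

    UploadFileMaintenance graph = new UploadFileMaintenance();
    Guid[] files = PXNoteAttribute.GetFileNotes(Base.Document.Cache, Base.Document.Current);
    foreach (Guid fileID in files)
    {
        FileInfo existingFile = graph.GetFileWithNoData(fileID);
        if (existingFile != null && existingFile.Name == fileName)
        {
            // Same call that is commented out in the ExportReport code above.
            UploadFileMaintenance.DeleteFile(existingFile.UID);
        }
    }
    return adapter.Get();
}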
Hope this helps. Also, it would be great if you could update the Stack Overflow post based on these suggestions.
Thanks
Nayan Mansinha
Lead - Developer Support | Acumatica
I am using the ServiceStack.Text JsonObject parser to map into my domain model. I basically have everything working, except when using LINQ to filter on ArrayObjects and then trying to convert the result using ConvertAll. I cannot get around, after using LINQ, adding element by element to a JsonArrayObjects list and then passing that on.
var tmpList = x.Object("references").ArrayObjects("image").Where(y => y.Get<int>("type") != 1).ToList();

JsonArrayObjects tmpStorage = new JsonArrayObjects();
foreach (var pic in tmpList)
{
    tmpStorage.Add(pic);
}

if (tmpStorage.Count > 0)
{
    GalleryPictures = tmpStorage.ConvertAll(RestJsonToModelMapper.jsonToImage);
}
Question:
Is there a more elegant way to get from IEnumerable back to JsonArrayObjects?
Casting will not work, since Where copies the elements into a new sequence instead of manipulating the old one; therefore the result is not a downcast JsonArrayObjects but a new list object.
Best
Whether this is more elegant is arguable, but I would probably do:
var tmpStorage = new JsonArrayObjects();
tmpList.ForEach(pic => tmpStorage.Add(pic));
And if this kind of conversion is used frequently, you may create an extension method:
public static JsonArrayObjects ToJsonArrayObjects(this IEnumerable<JsonObject> pics)
{
    var tmpStorage = new JsonArrayObjects();
    foreach (var pic in pics)
    {
        tmpStorage.Add(pic);
    }
    return tmpStorage;
}
This way you would end up with simpler consumer code:
var tmpStorage = x.Object("references")
.ArrayObjects("image")
.Where(y => y.Get<int>("type") != 1)
.ToJsonArrayObjects();
Like this?
var pictures = x.Object("references")
.ArrayObjects("image")
.Where(y => y.Get<int>("type") != 1)
.Select(RestJsonToModelMapper.jsonToImage)
.ToList();