I am attempting a simple Select on a source JSON dataset in an Azure Data Factory data flow, but I am getting an error saying that none of the columns from my source are valid. I am using the exact same configuration as the tutorial video, except that instead of a CSV file I am using a JSON file.
In the video, at 1:12, you can see that after configuring the source dataset, the source projection shows all of the columns from the source schema. Below is a screen shot from the tutorial video:
And below is a screen shot from my attempt:
(I blurred the column names because they match column names from a vendor app)
Note that in my projection I am unable to modify the data types or the format. I'm not sure why, but since I don't need to modify either, I moved on. I did try with a CSV and was able to modify the data types there, so I'm assuming this is a JSON thing, but I'm noting it here in case there is some configuration I should take a look at.
At 6:48 in the video, you'll see the user add a select task, exactly as I have done. Below is a screen shot of the select task in the tutorial immediately following adding the task:
Notice the source columns all appear. Below is a screen shot of my select task:
I'm curious why the column names are missing. If I type them in manually, I get an error: "Column not found".
For reference, below are screenshots of my data source setup. I'm using a Data Lake Storage Gen2 Linked Service connected via Managed Identity and the AutoResolveIntegrationRuntime.
Note that I tried this with a CSV as well. I was able to edit the data type and format on the CSV, but I get the same "Column not found" error on the next step.
Try doing this in a different browser or clear your browser cache. It may just be a formatting thing in the auto-generated JSON. This has happened to me before.
I am loading data via pipelines into an ADLS Gen2 container.
Now I want to create a table with details of when the pipeline started running and when it completed, with fields like the ones below,
where
startts - start time of the job
endts - end time of the job
and extractts is the extraction time, which is what I want to capture. Is there any approach I can use to create such a table? Help will be really appreciated.
It might not be exactly what you need, but you can add columns inside a copy activity with the expression @pipeline().TriggerTime to get your "startts" field. Go to the bottom option on the source tab (Additional columns), select New, and for the value choose "Add dynamic content".
Here is an example
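As a rough sketch of what that produces (the source type, activity name, and column name here are just placeholders for your own), the copy activity source JSON ends up with an additionalColumns entry along these lines:

"source": {
    "type": "DelimitedTextSource",
    "additionalColumns": [
        { "name": "startts", "value": "@pipeline().TriggerTime" }
    ]
}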
For your "extractts" you could use the property of the activity called "executionDuration", which will give you the time it took to adf to run the activity in seconds. Again, use dynamic content and get this: #activity('ReadFile').output.executionDuration, then store this value wherever you need.
In this example I'm storing the value in a variable, but you could use it anywhere.
I don't understand your need for a field "endts"; I would simply do startts + extractts to get it.
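If you do want to materialise endts as well, a sketch of one option (reusing the example activity name 'ReadFile' from above; the int() wrapper is just there in case the duration comes back as a non-integer) is a dynamic expression like:

@addSeconds(pipeline().TriggerTime, int(activity('ReadFile').output.executionDuration))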
I am mostly new to PowerApps but I am creating a stand-alone app that uses Power Automate to upload records to a SharePoint library. I have that working, and on one of the forms I have search functionality so that the gallery is filtered by the value in a few text boxes.
What I want to do, on the form where they upload new records, is look up whether metadata already exists and eventually populate text boxes with the information from those records. Right now I am using a button with this code in the OnSelect event just to find how many records exist with that project number:
Set(varCount, CountRows(Filter(ProjectDocuments,'txtProj#'.Text, ProjNum )))
I also tried this:
If(CountRows(Filter(ProjectDocuments,'txtProj#'.Text, ProjNum )) > 0, Set(varCount, 1), Set(varCount,2))
The only warning I get is about large data sets. However, when I try to run it, nothing happens. When I look at the monitor details, I get this error first:
"The query is not valid.\r\nclientRequestId: 816f2bfb-ab50-4285-b9c9-a7e03548d15f\r\nserviceRequestId: 816f2bfb-ab50-4285-b9c9-a7e03548d15f"
Then this one:
"Error when trying to retrieve data from the network"
The connection works when I am filtering a gallery, but not when I'm trying to use the count. Does anyone have any idea what is happening?
Thanks!
I'm not sure, but I suspect that there's an error in your Filter. The second argument should result in a logical true/false (include the record or not). Yours looks to resolve to a string.
Filter(ProjectDocuments,'txtProj#'.Text, ProjNum )
I think you mean
Filter(ProjectDocuments,'txtProj#'.Text = ProjNum )
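Plugged back into your button, the OnSelect would then look something like this (keeping the names from your snippets):

Set(varCount, CountRows(Filter(ProjectDocuments, 'txtProj#'.Text = ProjNum)))

Note that CountRows over a Filter against SharePoint is not delegable, which is likely what the large data set warning is about; for lists over the row limit the count may be off, but the corrected condition should at least stop the query error.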
While adding fields, I am facing an issue where I am not able to open my table after a new field has been added. Whenever a field is added, it shows an error like this:
When I check the Event Viewer, it shows this error:
Full error code:
Can anyone help me with this issue?
It looks like SalesTable is not properly sync'd. This type of error usually just means you need to compile & sync the entire environment to get things "working" better.
Remove your custom fields, compile the table, then right-click the table and synchronize, verifying first that you can successfully synchronize the individual table. If you can sync it, then sync your entire data dictionary to ensure everything is synced up... if one table is off (in this way), it can mean other tables are as well.
You can choose to try and add fields and see if it works, but I would just start with a full system compile & sync.
Note: Your infolog screenshots cut off the error message. When asking a question, make sure to include the actual error messages and redact personal information. Your event viewer screenshot is also cut off...same thing.
I identified that the ID in SQL did not match the table in the AOT.
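For anyone else checking this, a rough way to compare the IDs (assuming an AX 2012-style setup; SQLDICTIONARY is the system table in the business database that records the IDs SQL knows about, and FIELDID = 0 is the row for the table itself) is something like:

SELECT TABID, FIELDID, NAME, SQLNAME
FROM SQLDICTIONARY
WHERE NAME = 'SalesTable' AND FIELDID = 0

Compare TABID against the value of tableNum(SalesTable) from an X++ job; if they differ, the table is out of sync with the AOT.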
Problem: No data is appearing in SSMS (SQL Server Management Studio)
I don't see any errors appearing and my job diagram successfully shows a process from input to output.
I'm trying to use the continuous export feature of Azure Application Insights, Stream Analytics, and SQL Database.
Here is my query:
SELECT
A.context.data.eventTime as eventTime,
A.context.device.type as deviceType,
A.context.[user].anonId as userId,
A.context.device.roleInstance as machineName
INTO DevUserlgnsOutput -- Output Name
FROM devUserlgnsStreamInput A -- Input Name
I tested the query with sample data using the output box below the query, and it returned what I expected, so I don't think the query itself is the issue.
I also know that the custom events I'm trying to display the attributes of have occurred since I began the job. My job is also still running and has not stopped since its creation.
In addition, I would like to point out that the monitoring graph on the stream analytics page detects 0 inputs, 0 outputs, and 0 runtime errors.
Thank you in advance for the help!
Below are some pictures that might help:
Stream Analytics Output Details
The Empty SSMS after I clicked "display top 1000 rows," which should be filled with data
No input events, output events, or runtime errors for the stream analytics job
I've had this issue twice, with two separate Application Insights resources, containers, jobs, etc. Both times I solved it by editing the path pattern of the input(s) to my job.
To navigate to the necessary blade to make the following changes:
1) Click on your stream analytics job
2) Click "inputs" under the "job topology" section of the blade
3) Click your input (if you have multiple inputs, do this one at a time)
4) Use the blade that pops up on the right side of the screen
The 4 potential solutions I've come across are (A-D below):
A. Making sure the path pattern you enter is plain text with no hidden characters (sometimes copying it from the container on Azure made it not plain text).
Steps:
1) Cut the path pattern you have already in the input blade
2) Paste it into Notepad and re-copy it
3) Re-paste it into the path pattern slot of your input
B. Append your path pattern with /{date}/{time}
Simply type this at the end of your path pattern in the blade's textbox
C. Remove the container name and the "/" that immediately follows it from the beginning of your path pattern (see the picture and the example after D below)
Edit path pattern
Should be self-explanatory after seeing the pic.
D. Changing the date format of your input to YYYY-MM-DD in the drop-down box.
Should also be self-explanatory (look at the above picture if not).
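Putting B and C together, a path pattern that started out like the first line below would end up like the second (the container and export folder names here are just placeholders):

mycontainer/myappinsightsexport_12345/Event
myappinsightsexport_12345/Event/{date}/{time}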
Hope this helps!!
I am using the test plugin for VS 2012 (although I have just installed 2013), and need to know:
Is it possible to have a parameter pass a different value from a selected list while load testing?
I have used the sample load test located here: http://www.visualstudio.com/get-started/load-test-your-app-vs and created a new web test that meets my needs as below.
I have a simple journey recorded that is an email registration web page. The journey is essentially completing name & address, email, conf email, password, conf password. On submission of the form, a verification email is sent.
I need to check that this process can handle around 3000 users. The address the verification email is actually sent to has been hardcoded for test purposes, but I need a unique email address to submit the form. I would essentially like to run 3000 test cases through, just changing the email address each time.
What is the best way to do this?
The simple answer is to do a web search for data driving (or data driven) Visual Studio web performance tests. You should find many articles and tutorials.
In more detail:
Outline of how to data drive a test
Firstly, Visual Studio distinguishes different types of test. A Load Test is a way of running individual test cases many times, as if by many simultaneous users, gathering data about the test executions and producing a report. The test cases that a load test can execute include Web Performance Tests and Coded UI Tests; both of these can be data driven.
Data driving a Web Performance Test requires a data source. The data can come from CSV, XML, a spreadsheet, a database, or TFS. I will describe using CSV.
Create a CSV file, containing something similar to the following. Note that the top line of field names is required and those names are used within the test.
Name,Email,Telephone
Fred,fred@example.com,0123 456789
George,george@example.com,0123 456790
Harry,harry@example.com,0123 456791
See also the question "CodedUI test does not read data from CSV input file" for some notes on CSV file creation.
Open the test project in Visual Studio and open the .webtest file for the test. Use the context (right-click) menu of the top node of the test, ie the test's name (or use the corresponding icon) and select "Add data source ...". Follow the prompts to add the CSV file into the project.
Within the Web Performance Test, expand the request to show the form parameters, query string, or whatever is to use the data. View the properties panel of the relevant field and select the appropriate property; in many cases it is the Value property. Click the little triangle for choosing a value for the property. The popup should show the data source; expand the items shown and select the required field. After selecting the field, the property will show a value such as {{DataSource1.FileName#csv.Email}}. The doubled curly braces ({{ and }}) indicate the use of a context parameter. All the used data source fields are available as context parameters, and all of the data source fields can be made available by altering the Select Columns property of the data source file. A data source field can be used as part of a property value by using values such as
SomeText{{DataSource1.FileName#csv.Email}}AndMoreText
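For reference, in the underlying .webtest XML the bound form field then looks roughly like this (a sketch with the other recorded attributes omitted; "Email" here is just the field name from the example CSV):

<FormPostParameter Name="Email" Value="{{DataSource1.FileName#csv.Email}}" />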
Data source access methods
The data from the data source can be read and used in four ways. The default is Sequential. Other orders are selected using Solution Explorer to access the properties of the file (e.g. FileName#csv). The Access Method property can be set to one of:
Sequential - data is read sequentially through the file. After the last line of the file is read, the first line of the file will be the next line to be read. Thus each line may be read more than once.
Random - data is read randomly.
Unique - data is read sequentially through the file. After the end of the file is read, the test will not be executed again. Thus each line can only be read once.
Do not move cursor automatically - intended for more complex tests where the cursor is moved via calls from plug-ins.
A web test may use more than one data source file. These files may have different access methods. For example one file containing login names and passwords could be accessed Sequentially and another file with other data could be accessed Randomly. This would allow each login to try many different sets of the other data.
Data sources and loops
Web performance tests may contain loops. The properties of a loop include Advance data cursors. This allows, for example, a data source file to contain items to be found and added to a shopping basket such that each loop iteration adds a new item.