Jira is not recognising my CSV upload (Excel)

I'm creating a CSV template for some analysts; they fill it in and I then do a bulk upload to Jira.
I want to upload the entries as defects. The issue I'm facing is:
When filling out a defect there is a field where I have to select one of several options. For example, a field called 'Label A' has 3 options in a list.
In the Excel file I put 'Label A' in the top row, and under it, for one of the entries, I put the full name of one of the options exactly as displayed in JIRA, for example 'Option A'. So in the Excel file I simply write: Option A
But after uploading, Jira does not recognise this and returns a validation error.
The same happens for a tick-box field, e.g. 'Label B'.
However, any free-text field (something that is not a multiple-choice selection), such as 'Summary', validates fine with whatever random text I enter, e.g. 'abcd'.
So my question is: what am I doing wrong with the way I'm formatting my CSV when I upload answers to the multiple-choice parts of a defect?

I think you can create a sample issue in Jira (like the ones you need to upload), then export that issue ('Export all fields') and analyse the output Excel file. From that you can understand the input format that Jira requires from your CSV file.
UPDATED
The other thing you can do is read the JIRA log file; it will sometimes tell you the actual error that occurred.
Are you exporting your created issue with the 'Export all fields' option? (The original answer included a screenshot.)

The approach will depend on the field types you are using.
For example, if you were loading a simple text field, then the text in the CSV file will just be inserted into the text field.
If, however, you are populating a custom field that is represented by a radio button or a drop-down listbox then you will need to use the field mapping option that is offered during the CSV import.
Say you had a radio button that said either 'true' or 'false'. You would tick the mapping option for this field during the CSV import and configure it to map true -> true and false -> false. You can also do this mapping in the CSV file itself.
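As a concrete sketch (using the asker's placeholder names, which are not real field names), the value in a select-list column generally needs to match the option name exactly as it is configured in Jira, or be mapped to it on the value-mapping screen:
Summary,Label A,Label B
Login page throws error,Option A,Yes
Report totals are wrong,Option B,No
If 'Option A' in the CSV differs from the configured option even by case or stray whitespace, the import will typically fail validation unless you map the value explicitly during the import.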
You can see more details on this link:
Atlassian - Importing Data from CSV

The approach you can follow is as below:
Count the number of labels in the Issue you are trying to import.
Every label should go into its own separate column for it to be imported properly.
E.g., if there are 5 labels for an issue, create 5 'Labels_CSV' (or whatever name suits you) columns in the CSV header row and put the 5 labels in the data row, one per column, as in the sketch below.
Once the CSV is created, try to upload it with your existing config file which has mapping for Labels_CSV --> Labels.
Voila, the multiple labels will be imported properly.
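A hypothetical layout (the summary text is made up):
Summary,Labels_CSV,Labels_CSV,Labels_CSV,Labels_CSV,Labels_CSV
Sample defect,label1,label2,label3,label4,label5
All five columns are mapped to the same Labels field, and the importer combines them into one multi-valued set.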
Let me know if you have any queries.

Related

Azure Data Pipeline Copy Activity loses column names when copying from SAP Hana to Data Lake Store

I am trying to copy data from SAP Hana to Azure Data Lake Store (DLS) using a Copy Activity in a Data Pipeline via Azure Data Factory.
Our copy activity runs fine and we can see that rows made it from Hana to the DLS, but they don't appear to have column names (instead they are just given 0-indexed numbers).
This link says “For structured data sources, specify the structure section only if you want map source columns to sink columns, and their names are not the same.”
We are fine using the original column names from the SAP Hana table, so it seems like we shouldn't need to specify the structure section in our dataset. However, even when we do, we still just see numbers for column names.
We have also seen the translator property at this link, but are not sure if that is the route we need to go.
Can anyone tell me why we aren't seeing the original column names copied into DLS and how we can change that? Thank you!
UPDATE
Setting the firstRowAsHeader property of the format section on our dataset to true basically solved the problem. The console still shows the numerical indices, but now includes the headers we are after as the first row. Upon downloading and opening the file, we can see the numbers are not there (the console just shows them for whatever reason), and it is a standard comma-delimited file with a header row and one data row per line.
Example:
COLUMNA,COLUMNB
aVal1,bVal1
aVal2,bVal2
We can now tell our sources and sinks to write and expect this format when reading.
BONUS UPDATE:
To get rid of the numerical indices and see the proper column headers in the console, click Format in the top-left corner, and then check the "First row is a header" box toward the bottom of the resulting blade
See the update above.
The format.firstRowAsHeader property needed to be set to true.
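For reference, a minimal sketch of where that property lives in a dataset definition, assuming the TextFormat type Data Factory uses for delimited-text datasets (the folder path here is made up):
"typeProperties": {
  "folderPath": "mycontainer/output/",
  "format": {
    "type": "TextFormat",
    "columnDelimiter": ",",
    "firstRowAsHeader": true
  }
}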

EViews can't export to Excel when print titles are defined

We want to save time when exporting to Excel from EViews. To do this we export a long list of variables in one command; in total, almost 800 rows are exported at once. We want to have titles at the top of each page. To be readable we have at most about 70 rows per page. We cannot put titles in the rows at the top of each page, because the titles would be overwritten by EViews, so we use the Print titles option in Excel.
The problem is that when we use Print titles we get an error we do not get otherwise, and afterwards the Excel file loses all its old data:
The name cannot be the same as a built in name.
Old name: Print_Titles
Is there a way to get around this?
EDIT: Depending on how the print titles are stored in the xlsx file, the file either gets overwritten or only the print titles get "removed". When unzipping the xlsx file and reading the XML, I get this row:
<definedName name="_xlnm.Print_Titles" localSheetId="0">MySheet!$1:$1</definedName>
If I do the same after the export, the _xlnm. prefix has been removed and I get this:
<definedName name="Print_Titles" localSheetId="0">MySheet!$1:$1</definedName>
And then the print titles don't work and have to be set up manually again.
This seems to be a bug in either Excel or EViews. After a lot of contact with people at EViews, we got a workaround: use the Excel 2003 file format. Then the print titles are not removed.
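If staying on the xlsx format is preferable, the XML above suggests another angle (my own sketch, not EViews' advice): restore the built-in _xlnm. prefix after each export. This assumes stripping the prefix is the only damage done, and the file names are hypothetical:
import shutil
import zipfile

SRC = "report.xlsx"        # workbook just written by EViews (hypothetical name)
TMP = "report_fixed.xlsx"

with zipfile.ZipFile(SRC) as zin, zipfile.ZipFile(TMP, "w", zipfile.ZIP_DEFLATED) as zout:
    for item in zin.infolist():
        data = zin.read(item.filename)
        if item.filename == "xl/workbook.xml":
            # Re-add the built-in prefix that marks Print_Titles as a reserved name.
            data = data.replace(b'name="Print_Titles"', b'name="_xlnm.Print_Titles"')
        zout.writestr(item, data)

shutil.move(TMP, SRC)      # replace the original with the repaired copy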

Attribute field issue when editing layer properties (ArcGIS)

I am working with an XY dataset imported into Arc from a csv file. The data appears to be importing correctly, all of the data is displaying properly, and it is shown in the attribute table.
However when I try to edit the layer properties using a graduated color analysis in the symbology, one of the relevant attribute fields is not displaying. The imported data has proper column headings naming each column, and they appear exactly as they should in the attribute table. Looking at the "Fields" tab in the layer properties, all attribute fields are present, and selected to appear, including the one that does NOT appear in the drop down menu when I try to do a graduated color analysis.
Adding to my frustration is the fact that the field I am interested in was appearing, and I was able to work with it yesterday. It disappeared when I exported the data as a shapefile. I have imported two different csv files as layers, and when I import the other csv the relevant field appears as expected, including after the shapefile export. It simply eludes me for this one csv file. I have tried re-downloading the data (originally a txt file), importing it into Excel, and saving it as a csv, but the problem persists. Each time I import this data the needed field does not appear in the drop-down menu.
Has anyone encountered a similar situation?
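Not a definitive answer, but one thing that may be worth checking (an assumption on my part, based on graduated color symbology only offering numeric fields): whether this one csv is causing the column to be read as text instead of a number. A quick sketch with pandas, using a made-up file name:
import pandas as pd

df = pd.read_csv("points.csv")
print(df.dtypes)  # the field of interest should show as int64/float64, not object
# A single stray value (e.g. "N/A", a blank padded with spaces, or a
# thousands separator) is enough to force the whole column to text.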

Specify Excel 'Display Format' while importing data

I'm currently using an Excel document as a template for generating a report. This is done by first specifying an 'XML Map' in Excel and then importing data against it. The report generation works fine.
The problem is that I want the display format on the cell to be 'General' and not 'Text' after the import. I came across this link (yes, Excel 2007)
http://office.microsoft.com/en-gb/excel-help/xml-schema-definition-xsd-data-type-support-HP010206414.aspx#BMxsdexport
The link specifies that Excel will set string data from the xml import to display as 'Text' by default. I need this to be displayed as 'General' instead. Is there a way to do this?
The only solution I've come up with so far is to use a macro to change the display format after opening the document but if I can do it using only Excel settings it would be better.
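For what it's worth, that post-processing step doesn't have to be a VBA macro. A small script can reformat the mapped cells after the import; a sketch with openpyxl, where the file name, sheet name, and cell range are all placeholders:
from openpyxl import load_workbook

wb = load_workbook("report.xlsx")
ws = wb["Report"]                 # assumed sheet name
for row in ws["A2:D100"]:         # assumed range covered by the XML map
    for cell in row:
        cell.number_format = "General"
wb.save("report.xlsx")
(openpyxl does not preserve every workbook feature, so it is worth testing on a copy first.)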
Try to use the text import feature: http://office.microsoft.com/en-us/excel-help/text-import-wizard-HP010102244.aspx
NOTE: the important step that should address your need is the "Column data format" section, which often gets overlooked as it is the last step of the import. I hope that helps.
The mapping cannot be changed.
http://social.technet.microsoft.com/Forums/en-US/fdf99171-0a53-4716-9e72-25afc36ddf90/specify-excel-display-format-while-importing-data

Skipping rows when importing Excel into SQL using SSIS 2008

I need to import sheets which look like the following:
March Orders
***Empty Row
Week   Order #   Date     Cust #
3.1    271356    3/3/10   010572
3.1    280353    3/5/10   022114
3.1    290822    3/5/10   010275
3.1    291436    3/2/10   010155
3.1    291627    3/5/10   011840
The column headers are actually in row 3. I can use an Excel Source to import them, but I don't know how to specify that the information starts at row 3.
I Googled the problem, but came up empty.
Have a look at the threads below. The links have more details, but I've included some text from the pages (just in case the links go dead).
http://social.msdn.microsoft.com/Forums/en-US/sqlintegrationservices/thread/97144bb2-9bb9-4cb8-b069-45c29690dfeb
Q:
While loading a text file into SQL Server via SSIS, we have the provision to skip any number of leading rows from the source and load the data into SQL Server. Is there any provision to do the same for an Excel file?
The source Excel file for me has some description in the leading 5 rows; I want to skip it and start the data load from row 6. Please provide your thoughts on this.
A:
Easiest would be to give each row a number (a bit like an identity in SQL Server) and then use a conditional split to filter out everything where the number <= 5.
http://social.msdn.microsoft.com/Forums/en/sqlintegrationservices/thread/947fa27e-e31f-4108-a889-18acebce9217
Q:
Is it possible, when importing data from Excel to a DB table, to skip the first 6 rows, for example?
Also, the Excel data is divided into sections with headers. Is it possible, for example, to skip every 12th row?
A:
YES YOU CAN. Actually, you can do this very easily if you know the number of columns that will be imported from your Excel file. In your Data Flow task, you will need to set the "OpenRowset" custom property of your Excel connection (right-click your Excel connection > Properties; in the Properties window, look for OpenRowset under Custom Properties). To ignore the first 5 rows in Sheet1 and import columns A-M, you would enter the following value for OpenRowset: Sheet1$A6:M (notice that I did not specify a row number for column M; you can enter a row number if you like, but in my case the number of rows can vary from one iteration to the next).
AGAIN, YES YOU CAN. You can import the data using a conditional split. You'd configure the conditional split to look for something in each row that uniquely identifies it as a header row, and skip the rows that match this 'header logic'. Another option would be to import all the rows and then remove the header rows using a SQL script in the database, like a cursor that deletes every 12th row. Or you could add an identity field with seed/increment of 1/1 and then delete all rows with row numbers that divide evenly by 12. Something like that...
http://social.msdn.microsoft.com/Forums/en-US/sqlintegrationservices/thread/847c4b9e-b2d7-4cdf-a193-e4ce14986ee2
Q:
I have an SSIS package that imports from an Excel file with data beginning in the 7th row.
Unlike the same operation with a csv file ('Header Rows to Skip' in the Connection Manager Editor), I can't seem to find a way to ignore the first 6 rows of an Excel file connection.
I'm guessing the answer might be in one of the Data Flow Transformation objects, but I'm not very familiar with them.
A:
rbhro, actually there were 2 fields in the upper 5 rows that had some data, which I think prevented the importer from ignoring those rows completely.
Anyway, I did find a solution to my problem.
In my Excel source object, I used 'SQL Command' as the 'Data Access Mode' (it's a drop-down when you double-click the Excel Source object). From there I was able to build a query ('Build Query' button) that only grabbed the records I needed. Something like this:
SELECT F4, F5, F6
FROM [Spreadsheet$]
WHERE (F4 IS NOT NULL) AND (F4 <> 'TheHeaderFieldName')
Note: I initially tried ISNUMERIC instead of 'IS NOT NULL', but that wasn't supported for some reason.
In my particular case, I was only interested in rows where F4 wasn't NULL (and fortunately F4 didn't contain any junk in the first 5 rows). I could skip the whole header row (row 6) with the second condition in the WHERE clause.
So that cleaned up my data source perfectly. All I needed to do now was add a Data Conversion object between the source and destination (everything needed to be converted from Unicode in the spreadsheet), and it worked.
My first suggestion is not to accept a file in that format. Excel files to be imported should always start with column header rows. Send it back to whoever provides it to you and tell them to fix their format. This works most of the time.
We provide guidance to our customers and vendors about how files must be formatted before we can process them, and it is up to them to meet the guidelines as much as possible. People often aren't aware that files like that create a problem in processing (next month it might have six lines before the data starts), and they need to be educated that Excel files must start with the column headers, have no blank lines in the middle of the data, and not repeat the headers multiple times. Most important of all, they must have the same columns with the same column titles in the same order every time. If they can't provide that, then you probably don't have something that will work for automated import, as you will get the file in a different format every time depending on the mood of the person who maintains the Excel spreadsheet.
Incidentally, we push really hard to never receive any data from Excel (this only works some of the time, but if they have the data in a database, they can usually accommodate). They also must know that any changes they make to the spreadsheet format will result in a change to the import package and that they will be charged for those development changes (assuming these are outside clients and not internal ones). These changes must be communicated in advance and developer time scheduled; a file with the wrong format will fail and be returned to them to fix if not.
If that doesn't work, may I suggest that you open the file, delete the first two rows, and save it as a text file; then write a data flow that will process the text file. SSIS did a lousy job of supporting Excel, and anything you can do to get the file into a different format will make life easier in the long run.
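As a sketch of that pre-processing step (file and sheet names are made up, and this assumes pandas is available), a short script can strip the leading rows and hand the data flow a clean text file:
import pandas as pd

# Row 3 holds the real column headers in this example, so skip rows 1-2.
df = pd.read_excel("march_orders.xlsx", sheet_name="Sheet1", skiprows=2)
df.to_csv("march_orders.csv", index=False)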
My first suggestion is not to accept a file in that format. Excel files to be imported should always start with column header rows. Send it back to whoever provides it to you and tell them to fix their format. This works most of the time.
Not entirely correct.
SSIS forces you to use the format, and quite often it does not work correctly with Excel.
If you can't change the format, consider using our Advanced ETL Processor.
You can skip rows or fields, and you can validate the data the way you want.
http://www.dbsoftlab.com/etl-tools/advanced-etl-processor/overview.html
Sky is the limit
You can just use the OpenRowset property you can find in the Excel Source properties.
Take a look here for details:
SSIS: Read and Export Excel data from nth Row
Regards.
