Pentaho Data Integration Mapping - excel

I am using Pentaho Data Integration. I created a new transformation with two steps in it: the first is a CSV file of my data, the second is an Excel file with two columns, one holding the state names and the other the short form of that state name, for example ("New York", "NY").
In my CSV file I have a state column with full state names like "New York". I want to use my Excel file to map "New York" to "NY".
I have googled this all day with no clear answer. Can anyone help?

You can use a Merge Join step. With it you can merge both files and select the desired columns. Before merging, you have to sort both files on the fields you are using for the mapping — in your case, the state name.
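The sort-then-merge logic can be sketched in plain Python (the data values and column names here are illustrative, not PDI API):

```python
# Sketch of a sort-merge join: both inputs are sorted on the join key,
# then walked in lockstep. PDI's Merge Join step does this internally.
csv_rows = [{"state": "New York"}, {"state": "Texas"}]
excel_rows = [{"State": "Texas", "Short_state": "TX"},
              {"State": "New York", "Short_state": "NY"}]

csv_rows.sort(key=lambda r: r["state"])
excel_rows.sort(key=lambda r: r["State"])

merged = []
i = j = 0
while i < len(csv_rows) and j < len(excel_rows):
    a, b = csv_rows[i]["state"], excel_rows[j]["State"]
    if a == b:
        merged.append({**csv_rows[i],
                       "Short_state": excel_rows[j]["Short_state"]})
        i += 1
    elif a < b:
        i += 1      # no match for this main-stream row
    else:
        j += 1      # advance the lookup stream
```

This is why the sorting matters: the lockstep walk only works when both streams arrive ordered on the join key.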

I would recommend using a Stream Lookup step for this task. Check the test transformation attached; it does what you describe.
<?xml version="1.0" encoding="UTF-8"?>
<transformation-steps>
<steps>
<step>
<name>EXCEL</name>
<type>DataGrid</type>
<description/>
<distribute>Y</distribute>
<custom_distribution/>
<copies>1</copies>
<partitioning>
<method>none</method>
<schema_name/>
</partitioning>
<fields>
<field>
<name>State</name>
<type>String</type>
<format/>
<currency/>
<decimal/>
<group/>
<length>-1</length>
<precision>-1</precision>
<set_empty_string>N</set_empty_string>
</field>
<field>
<name>Short_state</name>
<type>String</type>
<format/>
<currency/>
<decimal/>
<group/>
<length>-1</length>
<precision>-1</precision>
<set_empty_string>N</set_empty_string>
</field>
</fields>
<data>
<line> <item>New York</item><item>NY</item> </line>
<line> <item>Texas</item><item>TX</item> </line>
</data>
<cluster_schema/>
<remotesteps> <input> </input> <output> </output> </remotesteps> <GUI>
<xloc>392</xloc>
<yloc>80</yloc>
<draw>Y</draw>
</GUI>
</step>
<step>
<name>CSV</name>
<type>DataGrid</type>
<description/>
<distribute>Y</distribute>
<custom_distribution/>
<copies>1</copies>
<partitioning>
<method>none</method>
<schema_name/>
</partitioning>
<fields>
<field>
<name>Full_state_name</name>
<type>String</type>
<format/>
<currency/>
<decimal/>
<group/>
<length>-1</length>
<precision>-1</precision>
<set_empty_string>N</set_empty_string>
</field>
</fields>
<data>
<line> <item>New York</item> </line>
<line> <item>Texas</item> </line>
</data>
<cluster_schema/>
<remotesteps> <input> </input> <output> </output> </remotesteps> <GUI>
<xloc>511</xloc>
<yloc>169</yloc>
<draw>Y</draw>
</GUI>
</step>
<step>
<name>Stream lookup</name>
<type>StreamLookup</type>
<description/>
<distribute>Y</distribute>
<custom_distribution/>
<copies>1</copies>
<partitioning>
<method>none</method>
<schema_name/>
</partitioning>
<from>EXCEL</from>
<input_sorted>N</input_sorted>
<preserve_memory>Y</preserve_memory>
<sorted_list>N</sorted_list>
<integer_pair>N</integer_pair>
<lookup>
<key>
<name>Full_state_name</name>
<field>State</field>
</key>
<value>
<name>State</name>
<rename>State</rename>
<default/>
<type>String</type>
</value>
<value>
<name>Short_state</name>
<rename>Short_state</rename>
<default/>
<type>String</type>
</value>
</lookup>
<cluster_schema/>
<remotesteps> <input> </input> <output> </output> </remotesteps> <GUI>
<xloc>510</xloc>
<yloc>79</yloc>
<draw>Y</draw>
</GUI>
</step>
</steps>
<order>
<hop> <from>EXCEL</from><to>Stream lookup</to><enabled>Y</enabled> </hop>
<hop> <from>CSV</from><to>Stream lookup</to><enabled>Y</enabled> </hop>
</order>
<notepads>
</notepads>
<step_error_handling>
</step_error_handling>
</transformation-steps>
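Conceptually, the Stream Lookup step builds an in-memory map from the lookup stream (the Excel data) and probes it for each main-stream row. A minimal Python sketch of that behavior, using the field names from the transformation above:

```python
# Lookup stream (the EXCEL step) becomes a hash map keyed on State.
lookup = {"New York": "NY", "Texas": "TX"}

# Main stream (the CSV step): each row gets the Short_state field appended.
main_rows = [{"Full_state_name": "New York"}, {"Full_state_name": "Texas"}]
result = [{**row, "Short_state": lookup.get(row["Full_state_name"])}
          for row in main_rows]
```

Unlike Merge Join, no sorting is needed, because the lookup is a hash probe rather than a lockstep walk.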


BeanIo/xsd not throwing exceptions with blank file / no records present

I want my BeanIO mapping to throw an exception when no records are present in the file (blank file), but it does not, even though it has the validation occurs="0+" in place. I also tried minOccurs=1 maxOccurs=unbounded.
My mapping file
<?xml version="1.0" encoding="UTF-8"?>
<beanio xmlns="http://www.beanio.org/2012/03">
<stream name="Records" format="fixedlength" strict="true">
<record name="SampleRecord" class="com.test.SampleRecord" occurs="0+">
<field name="mobileNumber" type="string" position="0" length="10" regex="[0-9]*" required="true"/>
<field name="alternateMobileNumber" type="string" position="10" length="20" regex="[0-9]*" required="false"/>
</record>
</stream>
</beanio>
You can try this mapping.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<beanio xmlns="http://www.beanio.org/2012/03"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.beanio.org/2012/03 http://www.beanio.org/2012/03/mapping.xsd">
<stream name="Records" format="fixedlength" strict="true" minOccurs="1">
<record name="SampleRecord" class="com.test.SampleRecord" occurs="0+">
<field name="mobileNumber" type="string" position="0" length="10" regex="[0-9]*" required="true"/>
<field name="alternateMobileNumber" type="string" position="10" length="20" regex="[0-9]*" required="false"/>
</record>
</stream>
</beanio>
Note the attribute minOccurs="1" on the stream element. The documentation states this:
minOccurs - The minimum number of times the record layout must be read
from an input stream. Defaults to 0.
Thus, changing minOccurs to 1 causes BeanIO to throw an exception with an empty string as input.

move an element to another element or create a new one if it does not exist using xslt-3

Using XSLT 3, I need to take all content elements' values and move them to the title elements (if a title element already exists in a record, the content value needs to be appended to it with a separator like -). I have now put in my real data, since the solution below does not solve the problem when applied to something like:
example input:
<data>
<RECORD ID="31365">
<no>25099</no>
<seq>0</seq>
<date>2/4/2012</date>
<ver>2/4/2012</ver>
<access>021999</access>
<col>GS</col>
<call>889</call>
<pr>0</pr>
<days>0</days>
<stat>0</stat>
<ch>0</ch>
<title>1 title</title>
<content>1 content</content>
<sj>1956</sj>
</RECORD>
<RECORD ID="31366">
<no>25100</no>
<seq>0</seq>
<date>2/4/2012</date>
<ver>2/4/2012</ver>
<access>022004</access>
<col>GS</col>
<call>8764</call>
<pr>0</pr>
<days>0</days>
<stat>0</stat>
<ch>0</ch>
<sj>1956</sj>
<content>1 title</content>
</RECORD>
</data>
expected output:
<data>
<RECORD ID="31365">
<no>25099</no>
<seq>0</seq>
<date>2/4/2012</date>
<ver>2/4/2012</ver>
<access>021999</access>
<col>GS</col>
<call>889</call>
<pr>0</pr>
<days>0</days>
<stat>0</stat>
<ch>0</ch>
<title>1 title - 1 content</title>
<sj>1956</sj>
</RECORD>
<RECORD ID="31366">
<no>25100</no>
<seq>0</seq>
<date>2/4/2012</date>
<ver>2/4/2012</ver>
<access>022004</access>
<col>GS</col>
<call>8764</call>
<pr>0</pr>
<days>0</days>
<stat>0</stat>
<ch>0</ch>
<sj>1956</sj>
<title>1 title</title>
</RECORD>
</data>
With my attempt I did not manage to move the elements; I just got an empty line where the content element used to be, so please include the removal of blank lines in the suggested solution.
I believe the removal of blank lines could be handled with
<xsl:template match="text()"/>
One way to achieve this is the following template. It uses XSLT 3.0 text value templates (enabled by expand-text="true").
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="3.0" expand-text="true">
<xsl:output method="xml" indent="yes" />
<xsl:mode on-no-match="shallow-copy" />
<xsl:strip-space elements="*" /> <!-- Remove space between elements -->
<xsl:template match="RECORD">
<xsl:copy>
<xsl:copy-of select="@*" />
<title>{title[1]}{if (title[1]) then ' - ' else ''}<xsl:value-of select="content" separator=" " /></title>
<xsl:apply-templates select="node() except (title,content)" />
</xsl:copy>
</xsl:template>
</xsl:stylesheet>
Its output is as desired.
If you want to separate the <content> elements with a - too, you can simplify the core <title> expression to
<xsl:value-of select="title|content" separator=" - " />
EDIT:
All I changed was replacing chapter with RECORD, and it works fine with Saxon-HE 9.9.1.4J. The only difference in the output is that the title element always appears in the first position, but that shouldn't matter. I also added a directive to remove space between elements.
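For comparison only, the same merge can be sketched with Python's standard xml.etree.ElementTree (an illustration of the logic, not part of the XSLT answer; the sample records are abbreviated):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<data><RECORD ID='31365'><title>1 title</title>"
    "<content>1 content</content></RECORD>"
    "<RECORD ID='31366'><content>1 title</content></RECORD></data>")

for record in doc.findall("RECORD"):
    title = record.find("title")
    content = record.find("content")
    if content is not None:
        if title is not None:
            # Append the content value with the " - " separator.
            title.text = f"{title.text} - {content.text}"
        else:
            # No title yet: create one from the content value.
            title = ET.SubElement(record, "title")
            title.text = content.text
        record.remove(content)
```

As with the XSLT solution, the created title element may not keep the original position in the record.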

how to run the cron job in odoo

I created a cron job to fetch weather information every minute, but it does not work. Here is the code (.py function):
@api.model
def weather_cron(self):
    weather_location_ids = self.env['weather_location.weather_location'].search([])
    for weather_location_id in weather_location_ids:
        url_cron = weather_location_id.api_address + weather_location_id.name
        json_data = requests.get(url_cron).json()
        formatted_data = json_data['weather'][0]['main']
        formatted_data1 = json_data['main']['temp']
        formatted_data2 = json_data['main']['temp_min']
        formatted_data3 = json_data['main']['temp_max']
        formatted_data4 = json_data['main']['humidity']
        self.env['current_weather.current_weather'].create({
            'weather_id': weather_location_id.id,
            'main': formatted_data,
            'temp': formatted_data1,
            'temp_min': formatted_data2,
            'temp_max': formatted_data3,
            'humidity': formatted_data4,
        })
Cron Job (.xml file):
<?xml version="1.0" encoding="UTF-8"?>
<odoo>
<data noupdate="1">
<record forcecreate="True" id="create_weather_info_cron" model="ir.cron">
<field name="name">Weather Information</field>
<field name="user_id" ref="base.user_root"/>
<field name="active" eval="False" />
<field name="interval_number">1</field>
<field name="interval_type">minutes</field>
<field name="numbercall">-1</field>
<field name="doall" eval="False"/>
<field name="model" eval="'weather_location.weather_location'"/>
<field name="function" eval="'weather_cron'"/>
</record>
</data>
</odoo>
You made the cron job inactive. Since it is not active, it will not trigger the function you wrote. Change active to True:
<field name="active" eval="True"/>
All your fields are correct; add these two:
<field name="args" eval="'()'"/>
<!-- delay the call 2 minutes just to make sure but it's optional -->
<field name="nextcall" eval="(DateTime.now() + timedelta(minutes=2)).strftime('%Y-%m-%d %H:%M:%S')" />
Now, if the code is still not working, check the following:
#1- check that your XML file is listed in __openerp__.py or __manifest__.py
#2- if you don't know how to use debug mode in your IDE, just use logging to see whether Odoo calls your method
Hope this helps.
One more thing: if you used noupdate="1" in your XML file, Odoo will not update the record after it is first inserted; no matter what you change in the code, it will not affect the record in the database.
Just change the id, or delete the ir.cron record manually from the Settings menu.
EDITS:
every model named "model.name" has an xml_id of the form model_model_name
when you see model_id, remember to prefix the name with model_
<field name="model_id" ref="weather_location.model_weather_location"/>
and if they are in the same module, just put ref="model_weather_location"
But for ir.cron, just give the name of the model, because it is a Char field, not a many2one:
<field name="model" eval="'weather_location.weather_location'"/>
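The JSON extraction inside weather_cron can be checked in isolation. A minimal sketch with a sample OpenWeatherMap-style payload (the payload values here are made up):

```python
# Sample response shaped like the payload the cron parses.
json_data = {
    "weather": [{"main": "Clouds"}],
    "main": {"temp": 289.5, "temp_min": 288.0,
             "temp_max": 291.0, "humidity": 72},
}

# The same field accesses as in weather_cron, collected into the
# dict that would be passed to create().
values = {
    "main": json_data["weather"][0]["main"],
    "temp": json_data["main"]["temp"],
    "temp_min": json_data["main"]["temp_min"],
    "temp_max": json_data["main"]["temp_max"],
    "humidity": json_data["main"]["humidity"],
}
```

Testing this parsing separately (e.g. with logging, as suggested above) helps rule out the API response as the cause before debugging the cron record itself.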

How to parse nested XML inside textfile using Spark RDD?

I have an xml like:
1234^12^999^`<row><ab key="someKey" value="someValue"/><ab key="someKey1" value="someValue1"/></row>`^23232
We can parse a normal XML file easily using Scala's XML support or even the Databricks XML format, but how do I parse XML embedded inside text?
The XML data alone can be extracted using:
val top5duration = data.map(line => line.split("\\^")).filter(line => line(2) == "999").map(line => line(3))
But how do I proceed if I want to extract the value for each key?
Question: how are the nested XML elements treated? How would I access them?
For flattening a nested structure you can use explode.
Example: let's say I want every title (String type) / authors (WrappedArray) combination; that can be achieved with explode:
schema :
root
|-- title: string (nullable = true)
|-- author: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- initial: array (nullable = true)
| | | |-- element: string (containsNull = true)
| | |-- lastName: string (nullable = true)
show()
+--------------------+--------------------+
| title| author|
+--------------------+--------------------+
|Proper Motions of...|[[WrappedArray(J,...|
|Catalogue of 2055...|[[WrappedArray(J,...|
| null| null|
|Katalog von 3356 ...|[[WrappedArray(J)...|
|Astrographic Cata...|[[WrappedArray(P)...|
|Astrographic Cata...|[[WrappedArray(P)...|
|Results of observ...|[[WrappedArray(H,...|
| AGK3 Catalogue|[[WrappedArray(W)...|
|Perth 70: A Catal...|[[WrappedArray(E)...|
import org.apache.spark.sql.functions;
DataFrame exploded = src.select(src.col("title"),functions.explode(src.col("author")).as("auth"))
.select("title","auth.initial","auth.lastName");
exploded = exploded.select(exploded.col("initial"),
exploded.col("title").as("title"),
exploded.col("lastName"));
exploded.printSchema
exploded.show
root
|-- initial: array (nullable = true)
| |-- element: string (containsNull = true)
|-- title: string (nullable = true)
|-- lastName: string (nullable = true)
+-------+--------------------+-------------+
|initial| title| lastName|
+-------+--------------------+-------------+
| [J, H]|Proper Motions of...| Spencer|
| [J]|Proper Motions of...| Jackson|
| [J, H]|Catalogue of 2055...| Spencer|
sample xml file
<?xml version='1.0' ?>
<!DOCTYPE datasets SYSTEM "http://www.cs.washington.edu/research/projects/xmltk/xmldata/data/nasa/dataset_053.dtd">
<datasets>
<dataset subject="astronomy" xmlns:xlink="http://www.w3.org/XML/XLink/0.9">
<title>Proper Motions of Stars in the Zone Catalogue -40 to -52 degrees
of 20843 Stars for 1900</title>
<altname type="ADC">1005</altname>
<altname type="CDS">I/5</altname>
<altname type="brief">Proper Motions in Cape Zone Catalogue -40/-52</altname>
<reference>
<source>
<other>
<title>Proper Motions of Stars in the Zone Catalogue -40 to -52 degrees
of 20843 Stars for 1900</title>
<author>
<initial>J</initial>
<initial>H</initial>
<lastName>Spencer</lastName>
</author>
<author>
<initial>J</initial>
<lastName>Jackson</lastName>
</author>
<name>His Majesty's Stationery Office, London</name>
<publisher>???</publisher>
<city>???</city>
<date>
<year>1936</year>
</date>
</other>
</source>
</reference>
<keywords parentListURL="http://messier.gsfc.nasa.gov/xml/keywordlists/adc_keywords.html">
<keyword xlink:href="Positional_data.html">Positional data</keyword>
<keyword xlink:href="Proper_motions.html">Proper motions</keyword>
</keywords>
<descriptions>
<description>
<para>This catalog, listing the proper motions of 20,843 stars
from the Cape Astrographic Zones, was compiled from three series of
photographic plates. The plates were taken at the Royal Observatory,
Cape of Good Hope, in the following years: 1892-1896, 1897-1910,
1923-1928. Data given include centennial proper motion, photographic
and visual magnitude, Harvard spectral type, Cape Photographic
Durchmusterung (CPD) identification, epoch, right ascension and
declination for 1900.</para>
</description>
<details/>
</descriptions>
<tableHead>
<tableLinks>
<tableLink xlink:href="czc.dat">
<title>The catalogue</title>
</tableLink>
</tableLinks>
<fields>
<field>
<name>---</name>
<definition>Number 5</definition>
<units>---</units>
</field>
<field>
<name>CZC</name>
<definition>Catalogue Identification Number</definition>
<units>---</units>
</field>
<field>
<name>Vmag</name>
<definition>Visual Magnitude</definition>
<units>mag</units>
</field>
<field>
<name>RAh</name>
<definition>Right Ascension for 1900 hours</definition>
<units>h</units>
</field>
<field>
<name>RAm</name>
<definition>Right Ascension for 1900 minutes</definition>
<units>min</units>
</field>
<field>
<name>RAcs</name>
<definition>Right Ascension seconds in 0.01sec 1900</definition>
<units>0.01s</units>
</field>
<field>
<name>DE-</name>
<definition>Declination Sign</definition>
<units>---</units>
</field>
<field>
<name>DEd</name>
<definition>Declination for 1900 degrees</definition>
<units>deg</units>
</field>
<field>
<name>DEm</name>
<definition>Declination for 1900 arcminutes</definition>
<units>arcmin</units>
</field>
<field>
<name>DEds</name>
<definition>Declination for 1900 arcseconds</definition>
<units>0.1arcsec</units>
</field>
<field>
<name>Ep-1900</name>
<definition>Epoch -1900</definition>
<units>cyr</units>
</field>
<field>
<name>CPDZone</name>
<definition>Cape Photographic
Durchmusterung Zone</definition>
<units>---</units>
</field>
<field>
<name>CPDNo</name>
<definition>Cape Photographic Durchmusterung Number</definition>
<units>---</units>
</field>
<field>
<name>Pmag</name>
<definition>Photographic Magnitude</definition>
<units>mag</units>
</field>
<field>
<name>Sp</name>
<definition>HD Spectral Type</definition>
<units>---</units>
</field>
<field>
<name>pmRAs</name>
<definition>Proper Motion in RA
<footnote>
<para>the relation is pmRA = 15 * pmRAs * cos(DE)
if pmRAs is expressed in s/yr and pmRA in arcsec/yr</para>
</footnote>
</definition>
<units>0.1ms/yr</units>
</field>
<field>
<name>pmRA</name>
<definition>Proper Motion in RA</definition>
<units>mas/yr</units>
</field>
<field>
<name>pmDE</name>
<definition>Proper Motion in Dec</definition>
<units>mas/yr</units>
</field>
</fields>
</tableHead>
<history>
<ingest>
<creator>
<lastName>Julie Anne Watko</lastName>
<affiliation>SSDOO/ADC</affiliation>
</creator>
<date>
<year>1995</year>
<month>Nov</month>
<day>03</day>
</date>
</ingest>
</history>
<identifier>I_5.xml</identifier>
</dataset>
<dataset subject="astronomy" xmlns:xlink="http://www.w3.org/XML/XLink/0.9">
<title>Catalogue of 20554 Faint Stars in the Cape Astrographic Zone -40 to -52 Degrees
for the Equinox of 1900.0</title>
<altname type="ADC">1006</altname>
<altname type="CDS">I/6</altname>
<altname type="brief">Cape 20554 Faint Stars, -40 to -52, 1900.0</altname>
<reference>
<source>
<other>
<title>Catalogue of 20554 Faint Stars in the Cape Astrographic Zone -40 to -52 Degrees
for the Equinox of 1900.0</title>
<author>
<initial>J</initial>
<initial>H</initial>
<lastName>Spencer</lastName>
</author>
<author>
<initial>J</initial>
<lastName>Jackson</lastName>
</author>
<name>His Majesty's Stationery Office, London</name>
<publisher>???</publisher>
<city>???</city>
<date>
<year>1939</year>
</date>
<bibcode>1939HMSO..C......0S</bibcode>
</other>
</source>
</reference>
<keywords parentListURL="http://messier.gsfc.nasa.gov/xml/keywordlists/adc_keywords.html">
<keyword xlink:href="Positional_data.html">Positional data</keyword>
<keyword xlink:href="Proper_motions.html">Proper motions</keyword>
</keywords>
<descriptions>
<description>
<para>This catalog contains positions, precessions, proper motions, and
photographic magnitudes for 20,554 stars. These were derived from
photographs taken at the Royal Observatory, Cape of Good Hope between 1923
and 1928. It covers the astrographic zones -40 degrees to -52 degrees of
declination. The positions are given for epoch 1900 (1900.0). It includes
spectral types for many of the stars listed. It extends the earlier
catalogs derived from the same plates to fainter magnitudes. The
computer-readable version consists of a single data table.</para>
<para>The stated probable error for the star positions is 0.024 seconds of time
(R.A.) and 0.25 seconds of arc (dec.) for stars with one determination,
0.017 seconds of time, and 0.18 seconds of arc for two determinations, and
0.014 / 0.15 for stars with three determinations.</para>
<para>The precession and secular variations were derived from Newcomb's constants.</para>
<para>The authors quote probable errors of the proper motions in both coordinates
of 0.008 seconds of arc for stars with one determination, 0.0055 seconds for
stars with two determinations, and 0.0044 for stars with three.</para>
<para>The photographic magnitudes were derived from the measured diameters on the
photographic plates and from the magnitudes given in the Cape Photographic
Durchmusterung.</para>
<para>The spectral classification of the cataloged stars was done with the
assistance of Annie Jump Cannon of the Harvard College Observatory.</para>
<para>The user should consult the source reference for more details of the
measurements and reductions. See also the notes in this document for
additional information on the interpretation of the entries.</para>
</description>
<details/>
</descriptions>
<tableHead>
<tableLinks>
<tableLink xlink:href="faint.dat">
<title>Data</title>
</tableLink>
</tableLinks>
<fields>
<field>
<name>ID</name>
<definition>Cape Number</definition>
<units>---</units>
</field>
<field>
<name>rem</name>
<definition>Remark
<footnote>
<para>A = Astrographic Star
F = Faint Proper Motion Star
N = Other Note</para>
</footnote>
</definition>
<units>---</units>
</field>
<field>
<name>CPDZone</name>
<definition>Cape Phot. Durchmusterung (CPD) Zone
<footnote>
<para>All CPD Zones are negative. - signs are not included in data.
"0" in column 8 signifies Astrographic Plate instead of CPD.</para>
</footnote>
</definition>
<units>---</units>
</field>
<field>
<name>CPD</name>
<definition>CPD Number or Astrographic Plate
<footnote>
<para>See also note on CPDZone.
Astrographic plate listed "is the more southerly on which the
star occurs." Thus, y-coordinate is positive wherever possible.</para>
</footnote>
</definition>
<units>---</units>
</field>
<field>
<name>n_CPD</name>
<definition>[1234] Remarks
<footnote>
<para>A number from 1-4 appears in this byte for double stars where
the same CPD number applies to more than one star.</para>
</footnote>
</definition>
<units>---</units>
</field>
<field>
<name>mpg</name>
<definition>Photographic Magnitude
<footnote>
<para>The Photographic Magnitude is "determined from the CPD Magnitude
and the diameter on the Cape Astrographic Plates by means of the
data given in the volume on the Magnitudes of Stars in the Cape
Zone Catalogue."
A null value (99.9) signifies a variable star.</para>
</footnote>
</definition>
<units>mag</units>
</field>
<field>
<name>RAh</name>
<definition>Mean Right Ascension hours 1900</definition>
<units>h</units>
</field>
<field>
<name>RAm</name>
<definition>Mean Right Ascension minutes 1900</definition>
<units>min</units>
</field>
<field>
<name>RAs</name>
<definition>Mean Right Ascension seconds 1900</definition>
<units>s</units>
</field>
<field>
<name>DEd</name>
<definition>Mean Declination degrees 1900</definition>
<units>deg</units>
</field>
<field>
<name>DEm</name>
<definition>Mean Declination arcminutes 1900</definition>
<units>arcmin</units>
</field>
<field>
<name>DEs</name>
<definition>Mean Declination arcseconds 1900</definition>
<units>arcsec</units>
</field>
<field>
<name>N</name>
<definition>Number of Observations</definition>
<units>---</units>
</field>
<field>
<name>Epoch</name>
<definition>Epoch +1900</definition>
<units>yr</units>
</field>
<field>
<name>pmRA</name>
<definition>Proper Motion in RA seconds of time</definition>
<units>s/a</units>
</field>
<field>
<name>pmRAas</name>
<definition>Proper Motion in RA arcseconds</definition>
<units>arcsec/a</units>
</field>
<field>
<name>pmDE</name>
<definition>Proper Motion in Dec arcseconds</definition>
<units>arcsec/a</units>
</field>
<field>
<name>Sp</name>
<definition>HD Spectral Type</definition>
<units>---</units>
</field>
</fields>
</tableHead>
<history>
<ingest>
<creator>
<lastName>Julie Anne Watko</lastName>
<affiliation>SSDOO/ADC</affiliation>
</creator>
<date>
<year>1996</year>
<month>Mar</month>
<day>26</day>
</date>
</ingest>
</history>
<identifier>I_6.xml</identifier>
</dataset>
<dataset subject="astronomy" xmlns:xlink="http://www.w3.org/XML/XLink/0.9">
<title>Proper Motions of 1160 Late-Type Stars</title>
<altname type="ADC">1014</altname>
<altname type="CDS">I/14</altname>
<altname type="brief">Proper Motions of 1160 Late-Type Stars</altname>
<reference>
<source>
<journal>
<title>Proper Motions of 1160 Late-Type Stars</title>
<author>
<initial>H</initial>
<initial>J</initial>
<lastName>Fogh Olsen</lastName>
</author>
<name>Astron. Astrophys. Suppl. Ser.</name>
<volume>2</volume>
<pageno>69</pageno>
<date>
<year>1970</year>
</date>
<bibcode>1970A&AS....2...69O</bibcode>
</journal>
</source>
<related>
<holding role="similar">II/38 : Stars observed photoelectrically by Dickow et al.
<xlink:simple href="II/38"/>
</holding>Fogh Olsen H.J. 1970, Astron. Astrophys. Suppl. Ser., 2, 69.
Fogh Olsen H.J. 1970, Astron. Astrophys., Suppl. Ser., 1, 189.</related>
</reference>
<keywords parentListURL="http://messier.gsfc.nasa.gov/xml/keywordlists/adc_keywords.html">
<keyword xlink:href="Proper_motions.html">Proper motions</keyword>
</keywords>
<descriptions>
<description>
<para>Improved proper motions for the 1160 stars contained in the photometric
catalog by Dickow et al. (1970) are presented. Most of the proper motions
are from the GC, transferred to the system of FK4. For stars not included
in the GC, preliminary AGK or SAO proper motions are given. Fogh Olsen
(Astron. Astrophys. Suppl. Ser., 1, 189, 1970) describes the method of
improvement. The mean errors of the centennial proper motions increase with
increasing magnitude. In Right Ascension, these range from 0.0043/cos(dec)
for very bright stars to 0.096/cos(dec) for the faintest stars. In Dec-
lination, the range is from 0.065 to 1.14.</para>
</description>
<details/>
</descriptions>
<tableHead>
<tableLinks>
<tableLink xlink:href="pmlate.dat">
<title>Proper motion data</title>
</tableLink>
</tableLinks>
<fields>
<field>
<name>No</name>
<definition>Number
<footnote>
<para>Henry Draper or Bonner Durchmusterung number</para>
</footnote>
</definition>
<units>---</units>
</field>
<field>
<name>pmRA</name>
<definition>Centennial Proper Motion RA</definition>
<units>s/ca</units>
</field>
<field>
<name>pmDE</name>
<definition>Centennial Proper Motion Dec</definition>
<units>arcsec/ca</units>
</field>
<field>
<name>RV</name>
<definition>Radial Velocity</definition>
<units>km/s</units>
</field>
</fields>
</tableHead>
<history>
<ingest>
<creator>
<lastName>Julie Anne Watko</lastName>
<affiliation>ADC</affiliation>
</creator>
<date>
<year>1996</year>
<month>Jun</month>
<day>03</day>
</date>
</ingest>
</history>
<identifier>I_14.xml</identifier>
</dataset>
<dataset subject="astronomy" xmlns:xlink="http://www.w3.org/XML/XLink/0.9">
<title>Katalog von 3356 Schwachen Sternen fuer das Aequinoktium 1950
+89 degrees</title>
<altname type="ADC">1016</altname>
<altname type="CDS">I/16</altname>
<altname type="brief">Catalog of 3356 Faint Stars, 1950</altname>
<reference>
<source>
<other>
<title>Katalog von 3356 Schwachen Sternen fuer das Aequinoktium 1950
+89 degrees</title>
<author>
<initial>J</initial>
<lastName>Larink</lastName>
</author>
<author>
<initial>A</initial>
<lastName>Bohrmann</lastName>
</author>
<author>
<initial>H</initial>
<lastName>Kox</lastName>
</author>
<author>
<initial>J</initial>
<lastName>Groeneveld</lastName>
</author>
<author>
<initial>H</initial>
<lastName>Klauder</lastName>
</author>
<name>Verlag der Sternwarte, Hamburg-Bergedorf</name>
<publisher>???</publisher>
<city>???</city>
<date>
<year>1955</year>
</date>
<bibcode>1955</bibcode>
</other>
</source>
</reference>
<keywords parentListURL="http://messier.gsfc.nasa.gov/xml/keywordlists/adc_keywords.html">
<keyword xlink:href="Fundamental_catalog.html">Fundamental catalog</keyword>
<keyword xlink:href="Positional_data.html">Positional data</keyword>
<keyword xlink:href="Proper_motions.html">Proper motions</keyword>
</keywords>
<descriptions>
<description>
<para>This catalog of 3356 faint stars was derived from meridian circle
observations at the Bergedorf and Heidelberg Observatories. The
positions are given for the equinox 1950 on the FK3 system. The stars
are mainly between 8.0 and 10.0 visual magnitude. A few are brighter
than 8.0 mag. The lower limit in brightness resulted from the visibility
of the stars.</para>
</description>
<details>
<para>All stars were observed at both the Heidelberg and Bergedorf
Observatories. Normally, at each observatory, two observations were
obtained with the clamp east and two with the clamp west. The mean
errors are comparable for the two observatories with no significant
systematic difference in the positions between them. The mean errors of
the resulting positions should be approximated 0.011s/cos(dec) in right
ascension and ).023" in declination.</para>
<para>The proper motions were derived from a comparison with the catalog
positions with the positions in the AGK2 and AGK2A with a 19 year
baseline and from a comparison of new positions with those in Kuestner
1900 with about a fifty year baseline.</para>
<para>The magnitudes were taken from the AGK2. Most spectral types were
determined by A. N. Vyssotsky. A few are from the Bergedorfer
Spektraldurchmusterung.</para>
</details>
</descriptions>
<tableHead>
<tableLinks>
<tableLink xlink:href="catalog.dat">
<title>The catalog</title>
</tableLink>
</tableLinks>
<fields>
<field>
<name>ID</name>
<definition>Catalog number</definition>
<units>---</units>
</field>
<field>
<name>DMz</name>
<definition>BD zone</definition>
<units>---</units>
</field>
<field>
<name>DMn</name>
<definition>BD number</definition>
<units>---</units>
</field>
<field>
<name>mag</name>
<definition>Photographic magnitude</definition>
<units>mag</units>
</field>
<field>
<name>Sp</name>
<definition>Spectral class</definition>
<units>---</units>
</field>
<field>
<name>RAh</name>
<definition>Right Ascension hours (1950)</definition>
<units>h</units>
</field>
<field>
<name>RAm</name>
<definition>Right Ascension minutes (1950)</definition>
<units>min</units>
</field>
<field>
<name>RAs</name>
<definition>Right Ascension seconds (1950)</definition>
<units>s</units>
</field>
<field>
<name>Pr-RA1</name>
<definition>First order precession in RA per century</definition>
<units>0.01s/a</units>
</field>
<field>
<name>Pr-RA2</name>
<definition>Second order precession in RA per century</definition>
<units>0.0001s2/a2</units>
</field>
<field>
<name>pmRA</name>
<definition>Proper motion in RA from AGK2 positions</definition>
<units>0.01s/a</units>
</field>
<field>
<name>pmRA2</name>
<definition>Proper motion in RA from Kuestner positions</definition>
<units>0.01s/a</units>
</field>
<field>
<name>DE-</name>
<definition>Sign of declination (1950)</definition>
<units>---</units>
</field>
<field>
<name>DEd</name>
<definition>Declination degrees (1950)</definition>
<units>deg</units>
</field>
<field>
<name>DEm</name>
<definition>Declination minutes (1950)</definition>
<units>arcmin</units>
</field>
<field>
<name>DEs</name>
<definition>Declination seconds (1950)</definition>
<units>arcsec</units>
</field>
<field>
<name>Pr-de1</name>
<definition>First order precession in dec per century</definition>
<units>arcsec/ha</units>
</field>
<field>
<name>Pr-de2</name>
<definition>Second order precession in dec per century</definition>
<units>arcsec2/ha2</units>
</field>
<field>
<name>pmdec</name>
<definition>Proper motion in DE from AGK2 positions</definition>
<units>arcsec/ha</units>
</field>
<field>
<name>pmdec2</name>
<definition>Proper motion in DE from Kuestner positions</definition>
<units>arcsec/ha</units>
</field>
<field>
<name>epoch</name>
<definition>Epoch of observation - 1900.0</definition>
<units>yr</units>
</field>
<field>
<name>rem</name>
<definition>Note for star in printed catalog
<footnote>
<para>1 = ma (blend?)
3 = pr (preceding)
4 = seq (following)
5 = bor (northern)
6 = au (southern)
* = other note in printed volume (All notes in the printed volume have not
been indicated in this version.)
the printed volume sometimes has additional information on the systems with
numerical remarks.</para>
</footnote>
</definition>
<units>---</units>
</field>
</fields>
</tableHead>
<history>
<ingest>
<creator>
<lastName>Nancy Grace Roman</lastName>
<affiliation>ADC/SSDOO</affiliation>
</creator>
<date>
<year>1996</year>
<month>Feb</month>
<day>01</day>
</date>
</ingest>
</history>
<identifier>I_16.xml</identifier>
</dataset>
</datasets>
If you have the XML alone as an RDD[String], you can convert it to a DataFrame with the Databricks utility class com.databricks.spark.xml.XmlReader#xmlRdd.
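A minimal sketch of that conversion (the row tag and the RDD name `xmlStrings` are placeholders; depending on your spark-xml version the first argument to `xmlRdd` is a `SparkSession` or a `SQLContext`):

```scala
import com.databricks.spark.xml.XmlReader

// Sketch only: assumes a live SparkSession `spark` and an
// existing RDD[String] `xmlStrings` holding one XML document per record.
val df = new XmlReader()
  .withRowTag("row")          // placeholder row tag; adjust to your payload
  .xmlRdd(spark, xmlStrings)
df.printSchema()
```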
You could use SGML to parse your text file, using SGML's SHORTREF feature for mixed CSV-like formats such as yours and wiki syntaxes. With SHORTREF you declare text tokens to be replaced by other text (typically start- and end-element tags).
<!DOCTYPE data [
<!ELEMENT data O O (field+)>
<!ELEMENT field O O (#PCDATA|markup)>
<!ELEMENT markup O O (row)>
<!ELEMENT row - - (ab+)>
<!ELEMENT ab - - (#PCDATA)>
<!ENTITY start-field "<field>">
<!SHORTREF in-data "^" start-field>
<!USEMAP in-data data>
<!ENTITY start-markup "<markup>">
<!ENTITY end-markup "</markup>">
<!SHORTREF in-field "`" start-markup>
<!USEMAP in-field field>
<!SHORTREF in-markup "`" end-markup>
<!USEMAP in-markup markup>
]>
1234^12^999^`<row><ab key="someKey" value="someValue"/><ab key="someKey1" value="someValue1"/></row>`^23232
Parsing this using SGML will result in the following
<data>
<field>1234</field>
<field>12</field>
<field>999</field>
<field>
<markup>
<row>
<ab key="someKey" value="someValue"/>
<ab key="someKey1" value="someValue1"/>
</row>
</markup>
</field>
<field>23232</field>
</data>
The SHORTREF and USEMAP declarations tell SGML to treat a caret character as a start-element tag for <field> when in data child content, and to treat a backtick character as a start-element tag for markup when in field child content. When in markup child content, another backtick character ends the markup element.
SGML will also infer omitted start- and end-element tags from the O omission indicators and the content model rules.
Edit: to make this work without changing your data file (datafile.csv, say), instead of including the content verbatim in the master SGML file, declare an entity for it and place a reference to it like this:
<!DOCTYPE data [
<!-- ... same declarations as above ... -->
<!ENTITY datafile SYSTEM "datafile.csv">
]>
&datafile;
SGML will pull the content of datafile.csv into the datafile entity and replace the &datafile; entity reference with the file content.
I tried parsing the mentioned data at the RDD level, without using explode on a DataFrame. Please suggest any improvements.
Read the data as a text file and define a schema.
Split each string on the ^ delimiter.
Filter out bad records that don't conform to the schema.
Match the data against the schema defined earlier.
Now you will have data like below in a tuple, and we are left to parse the middle XML data.
(1234, 12, 999, '<row><ab key="someKey" value="someValue"/><ab key="someKey1" value="someValue1"/></row>', 23232)
Apply xml.attribute("key"): it will return all the keys.
If you need the value someValue and are not interested in someValue1, loop through this node sequence and apply a filter of contains("key") to eliminate the other keys. I have used the key Duration that was present in the data.
Apply the XPath \ "@value" on the previous step to get the value.
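The attribute lookup described above can be sketched with plain scala.xml, assuming the name-value layout <nv n="…" v="…"/> that the code below expects (the sample string and values here are made up for illustration):

```scala
import scala.xml.XML

// Hypothetical sample in the assumed name-value layout
val sample = """<row><nv n="Duration" v="42"/><nv n="ChannelNumber" v="7"/></row>"""
val nodes = XML.loadString(sample) \\ "nv"

// Filter the <nv> elements by their "n" attribute, then read the "v" attribute
val duration = nodes
  .filter(n => (n \ "@n").text == "Duration")
  .map(n => (n \ "@v").text.toInt)
  .headOption
  .getOrElse(0)
// duration is 42 for the sample above
```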
There is a similar question on the Cloudera forums.
// define a case class for schema matching with the data input
import scala.xml.XML

case class stb(server_unique_id: Int, request_type: Int, event_id: Int, stb_timestamp: String, stb_xml: String, device_id: String, secondary_timestamp: String)

val data = spark.read.textFile(args(0)).rdd // read data from the path supplied on the CLI

// check for the ^ delimiter and 7 fields, else filter out
val clean_Data = data.filter(line => line.trim().contains("^"))
  .map(line => line.split("\\^"))
  .filter(line => line.length == 7)

// match the schema and keep only records with event id = 100 and a tag containing Duration
val tup_Map = clean_Data.map(line => stb(line(0).toInt, line(1).toInt, line(2).toInt, line(3), line(4), line(5), line(6)))
  .filter(line => line.event_id == 100 && line.stb_xml.contains("Duration"))

// the xml is in name-value format, hence the attributes are all the same (n, v);
// parse the xml structure and pull out the necessary data:
// xmlnv holds the top-level NodeSeq of self-closing name-value tags
// (duration, channel, ... — 8 different values)
val xml_Map = tup_Map.map { line =>
  val xmld = XML.loadString(line.stb_xml)
  val xmlnv = xmld \\ "nv"
  var duration = 0
  for (i <- 0 until xmlnv.length if xmlnv(i).attributes.toString().contains("Duration"))
    duration = (xmlnv(i) \ "@v").text.toInt
  var channelNum = 0
  for (i <- 0 until xmlnv.length if xmlnv(i).attributes.toString().contains("ChannelNumber"))
    channelNum = (xmlnv(i) \ "@v").text.toInt
  var channelType = ""
  for (i <- 0 until xmlnv.length if xmlnv(i).attributes.toString().contains("ChannelType"))
    channelType = (xmlnv(i) \ "@v").text
  (duration, channelNum, channelType, line.device_id)
}

// persist xml_Map for further operations
xml_Map.persist()

How to set a default value for a column of a content type in SharePoint

I created a content type using a feature, as below:
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
<Field ID ="{4C939423-2090-413d-B241-724D9B66F74B}"
Name="VersionNumer"
DisplayName="Version Number"
Type="Text"
Required="TRUE"
Group="CT" >
<Default>0</Default>
</Field>
<Field ID ="{33E51B7A-FEE2-4995-B4BB-9F3F909C1015}"
Name="DocumentType"
DisplayName="Document Type"
Type="Choice"
Required="TRUE"
Group="CT">
<Default>Other</Default>
<CHOICES>
<CHOICE>Document</CHOICE>
<CHOICE>Excel</CHOICE>
<CHOICE>PowerPoint</CHOICE>
<CHOICE>Other</CHOICE>
</CHOICES>
</Field>
<ContentType ID="0x0101000728167cd9c94899925ba69c4af6743e"
Name="myCT"
Group="myCT"
Description="myCT"
Version="0">
<FieldRefs>
<FieldRef ID="{4C939423-2090-413d-B241-724D9B66F74B}" Name="VersionNumber" DisplayName="Version Number" Required="TRUE" />
<FieldRef ID="{33E51B7A-FEE2-4995-B4BB-9F3F909C1015}" Name="DocumentType" DisplayName="Document Type" Required="TRUE" />
</FieldRefs>
</ContentType>
</Elements>
How can I set the default value of VersionNumer to 0 and the default value of DocumentType to Other? I used the Default tag, but it was not effective.
And another question: how can I force the user to enter VersionNumer and DocumentType? I used the attribute Required="TRUE", but it was not successful.
Thanks in advance.
I've tried this in my environment, and it works perfectly. I copied and pasted the contents of elements.xml and didn't make a single modification.
Try this:
Delete your existing site columns and content type (in that order)
Deactivate your feature
IISRESET
Activate your feature again and check that the default values are OK; they should be
To set the choice default, list your choices first and put the Default tag after them:
<CHOICES>
<CHOICE>Document</CHOICE>
<CHOICE>Excel</CHOICE>
<CHOICE>Other</CHOICE>
</CHOICES>
<Default>Other</Default>