How to get the blob data from Oracle in C# 4.0?

I have a requirement to read data from an Oracle DB. One column is defined as a BLOB. Using that data, I need to frame an insert query like this: "insert into emp values('100','John',EMP_PIC);"
Here EMP_PIC is defined as a BLOB. Please suggest some ideas about this. I am using C# 4.0.

Perhaps you can use the sample project from this link. I hope this helps.
http://www.codeproject.com/Articles/13365/Insert-retrieve-an-image-into-from-a-blob-field-in
If you want to get a value from BLOB data using an OracleDataReader, first convert the byte array to an image using this:
private Image byteArrayToImage(byte[] byteArrayIn)
{
    // Wrap the raw BLOB bytes in a stream and let GDI+ decode the image.
    // Note: Image.FromStream requires the stream to remain open for the
    // lifetime of the Image, so do not dispose the MemoryStream here.
    MemoryStream ms = new MemoryStream(byteArrayIn);
    ms.Position = 0;
    Image returnImage = Image.FromStream(ms);
    return returnImage;
}
then read the BLOB data like this:
picFileData.Image = byteArrayToImage(dr["EMP_PIC"] as byte[]); // dr is an OracleDataReader
picFileData is a PictureBox control and EMP_PIC is the BLOB column in Oracle.
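For the insert side of the question, binding the BLOB as a parameter is safer than splicing binary data into the SQL string. Here is a minimal sketch assuming ODP.NET (Oracle.DataAccess.Client), an open OracleConnection named conn, and an emp table with id, name, and emp_pic columns; the image path is made up for illustration:

// Sketch only: insert a BLOB through a bind variable.
byte[] picBytes = File.ReadAllBytes(@"C:\pics\john.jpg"); // hypothetical image source

using (OracleCommand cmd = conn.CreateCommand())
{
    cmd.CommandText = "insert into emp values (:id, :name, :emp_pic)";
    cmd.Parameters.Add(":id", OracleDbType.Varchar2).Value = "100";
    cmd.Parameters.Add(":name", OracleDbType.Varchar2).Value = "John";
    cmd.Parameters.Add(":emp_pic", OracleDbType.Blob).Value = picBytes;
    cmd.ExecuteNonQuery();
}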

You could also try LINQ to SQL; it can be very useful for this kind of data access.

Related

Azure Cognitive Search int to Edm.String field issues

I'm having trouble trying to add data to my Azure Cognitive Search index. The data is being read from SQL Server tables with a Python script, which sends it to the index using the SearchIndexClient from the Azure Search SDK.
The problem is when sending Python int values into a search index field of type Edm.String. The link below seems to indicate that this should be possible: any number type is allowed to go into an Edm.String.
https://learn.microsoft.com/en-us/rest/api/searchservice/data-type-map-for-indexers-in-azure-search#bkmk_sql_search
However I get this error:
Cannot convert the literal '0' to the expected type 'Edm.String'.
Am I misunderstanding the docs? Is the Python int different from the SQL Server int when it goes through the Azure Search SDK?
I'm using pyodbc to connect to an Azure Synapse db and retrieving the rows with a cursor loop. This is basically what I'm doing:
search_client = SearchIndexClient(env.search_endpoint,
                                  env.search_index,
                                  SearchApiKeyCredential(env.search_api_key),
                                  logging_enable=True)
conn = pyodbc.connect(env.sqlconnstr_synapse_connstr, autocommit=True)
query = f"SELECT * FROM [{env.source_schema}].[{source_table}]"
cursor = conn.cursor()
cursor.execute(query)
source_table_columns = [source_table_column[0] for source_table_column in cursor.description]
rows = []
for source_table_values in cursor.fetchmany(MAX_ROWS_TO_FETCH):
    source_table_row = dict(zip(source_table_columns, source_table_values))
    rows.append(source_table_row)
upload = search_client.upload_documents(documents=rows)
If a row contains an int value and the corresponding search index field is Edm.String, we get the error:
Cannot convert the literal '0' to the expected type 'Edm.String'.
Thank you for providing the code snippet. The data type mapping link is applicable when using an Indexer to populate an index.
Indexers provide a convenient mechanism to load documents into an index from a source datasource. They perform the mapping outlined there by default, or can take an optional fieldMappings.
In the case of the code snippet, where the index is being updated manually, a type mismatch between source and target has to be handled by the user, by casting or converting the values. After you have built the dictionary, you can convert the int into a string using str() before uploading the batch to the index:
source_table_row[column_name] = str(source_table_row[column_name])
This is a Python sample that creates an indexer to update an index.

Azure FHIR: Get RawResource in Plain Text

I've just started my research on the Azure FHIR SQL Server version.
I'm having trouble getting the JSON resource in plain text, since it is stored compressed in the database (as shown in the following lines):
select r.RawResource from dbo.Resource r where r.IsHistory=0 and r.IsDeleted=0;
RAWRESOURCE
0x1F8B080000000000000A8492CB4EC2501086E7519AAE344122F7E04A4274618C90C8CEB8282D60136E2985A08477F79B39A71513D034E77466CECC3FFF5C0E124A2613D9C84AB64831F2483E65CD3F943BCE00EB4C22594A2A5FFC73FE2BB4502A9C5412EF579786B4898AA4234DEE1BB99531783152137B225DA431D6485A481DF408FB546AC8131FD9F0B839FA9E5BB10FDC1B64CDBD4572F966782C3999D9153F9463C949DF94E994A33E1AF366483BFCE7E014F5D561D4E2AB733A76B7398AF56E68111528D2CE47E4A069B4BE2D795D94487D7053EB538C3D300E01D608CEAABF4A0FCDD5A71C527B71CCFE8B0E3D1BAD74CE8999C1E2A4AA0D33D31E4DBC3564821F362765573953F71575D7E49A1C4D3EED429BBBEBBB781E53A50886F30B982F641C59C7356F1F2DB3ED5AC93DF32A62AB25FBCB99A6F8EEFFC8129CE409E4D17BFFCC2CE1737B5D7458F3B80E3B1CED790FEC2AF14F44ECFCA60432B49D4A2CCB83E7159035C352B50D69A10FE0A8B10DEB63319F18949C6A1CD36734ADDAD5B126DEEDF1DC7AA35BFADB2F00BDE3BDB5475BDBE2ACC41B1AC3ADAFF428DF000000FFFF
I tried different ways to get it, but none were successful.
select cast(r.RawResource as varchar(max)) VarcharResource,
CONVERT(varchar(max), r.RawResource, 0) VarcharResource2
from dbo.Resource r where r.IsHistory=0 and r.IsDeleted=0;
VarcharResource2
‹ „’ËNÂP†çQš®4A""÷àJBtaŒÈθ(-`n)… „w÷›9§Ð4çtfÎÌ?ÿ\J&ÙÈJ¶H1òH>eÍ?”;Î ëL""YJ*_üsþ+´P*œTïW—†´‰Š¤#M1x1R{""]¤1ÖHZHôûTjÈÙð¸9úž[±ÜdͽErùfx,9™Ù?”cÉIߔ锣>ófH;üçàõÕaÔâ«s:v·9Šõnh(ÒÎGä i´¾-y]”H}pSëSŒ=0ÖΪ¿JÍÕ§R{qÌþ‹=­tΉ™Á⤪3ÓM¼5d‚6'eW9S÷u×äšM>íB›»ë»xS¥†ó˜/dYÇ5o-³íZÉ=ó*b«%ûË™¦øîÿÈœä äÑ{ÿÌ,ás{]tXó¸;íyì*ñODìü¦2´J,˃ç5ÃRµi¡à¨±ëc1Ÿ”œjÓg4­ÚÕ±&ÞíñÜz£[úÛ/ ½ã½µG[Ûâ¬Äí¯ô(ß ÿÿ
Does anybody know the correct way to get the JSON back in plain text?
Thanks
The resources are Gzipped, so something like:
string rawResource;
// rawResourceStream is a Stream over the RawResource bytes, and
// ResourceEncoding is the text encoding used by the FHIR server (UTF-8).
using (rawResourceStream)
using (var gzipStream = new GZipStream(rawResourceStream, CompressionMode.Decompress))
using (var reader = new StreamReader(gzipStream, ResourceEncoding))
{
    rawResource = await reader.ReadToEndAsync();
}
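For a self-contained version, here is a sketch that pulls RawResource with ADO.NET and gunzips it; the connection string, the top 1 filter, and the UTF-8 encoding are my assumptions, not part of the original answer:

// Sketch: read one compressed RawResource and decode it to a JSON string.
// Requires System.Data.SqlClient, System.IO, System.IO.Compression, System.Text.
using (var conn = new SqlConnection(connectionString)) // hypothetical connection string
using (var cmd = new SqlCommand(
    "select top 1 r.RawResource from dbo.Resource r where r.IsHistory = 0 and r.IsDeleted = 0",
    conn))
{
    conn.Open();
    var compressed = (byte[])cmd.ExecuteScalar();

    using (var input = new MemoryStream(compressed))
    using (var gzip = new GZipStream(input, CompressionMode.Decompress))
    using (var reader = new StreamReader(gzip, Encoding.UTF8))
    {
        string json = reader.ReadToEnd();
        Console.WriteLine(json);
    }
}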

How to render JSON using Stream Analytics Query

I have input in the form of JSON stored in Blob Storage, and output in the form of a SQL Azure table.
My query works, and I am successfully moving the value of a specific property in the JSON to the corresponding column of the SQL Azure table.
Now, for one column, I want to copy the entire JSON payload as a serialized string into one SQL column, but I can't find a suitable library function to do that.
SELECT
    CASE
        WHEN GetArrayLength(E.event) > 0
        THEN GetRecordPropertyValue(GetArrayElement(E.event, 0), 'name')
        ELSE ''
    END AS EventName,
    E.internal.data.id AS DataId,
    E.internal.data.documentVersion AS DocVersion,
    E.context.custom AS CustomDimensionsPayload
INTO OutputTblEvents
FROM InputBlobEvents E
This CustomDimensionsPayload should actually be a JSON string.
I made a user-defined function (in JavaScript) which did the job for me:
function main(InputJSON) {
    // Serialize the whole input record back into a JSON string.
    var InputJSONString = JSON.stringify(InputJSON);
    return InputJSONString;
}
Then, inside the query, I used the function like this:
SELECT udf.ConvertToJSONString(COLLECT()) AS InputJSON
INTO outputX
FROM inputY
You just need to reference the input object itself instead of COLLECT() if you want the entire payload to be converted. I was trying to do this too, so I figured I'd add what I did.
I used the same function suggested by PerSchjetne; the query then becomes:
SELECT udf.JSONToString(IoTInputStream)
INTO [SQLTelemetry]
FROM [IoTInputStream]
Your output will now be the full JSON string, including all the metadata extras that IoT Hub adds on.

Can I append Avro serialized data to an existing Azure blob?

I am asking if I can, but I would also like to know if I should.
Here's my scenario:
I am receiving Avro serialized messages in small batches. I want to store them for later analysis using a Hive table with the Avro SerDe. I'm running in Azure, and I am storing the messages in a blob.
I am trying to avoid having lots of small blobs (because I believe this will have a negative impact on Hive). If I have the Avro header already written to the blob, I believe I can append Avro data blocks with CloudBlockBlob.PutBlockAsync() (as long as I know the sync marker).
However, I've examined two .NET libraries, and they don't seem to support my approach (I would have to write the entire Avro container file at once).
http://www.nuget.org/packages/Apache.Avro/
http://www.nuget.org/packages/Microsoft.Hadoop.Avro/
Am I taking the correct approach?
Am I missing something in the libraries?
My question is similar (but different) to this one:
Can you append data to an existing Avro data file?
The short answer here is that I was trying to do the wrong thing.
First, we decided that Avro is not the appropriate format for the on-the-wire serialization, primarily because Avro expects the schema definition to be present in every Avro file. This adds a lot of weight to what is transmitted. You could still use Avro, but that's not what it's designed for. (It is designed for big files on HDFS.)
Secondly, the existing libraries (for .NET) only support appending to Avro files via a stream. This does not map well to Azure block blobs (you don't want to open a block blob as a stream).
Thirdly, even if these first two could be bypassed, all of the items in a single Avro file are expected to share the same schema. We had a set of heterogeneous items flowing in that we wanted to buffer, batch, and write to a blob. Trying to segregate the items by type/schema as we were writing them to the blob added a lot of complication. In the end, we opted to use JSON.
It is possible to do this.
First of all, you have to use CloudAppendBlob:
CloudAppendBlob appBlob = container.GetAppendBlobReference(
    string.Format("{0}{1}", date.ToString("yyyyMMdd"), ".log"));
appBlob.AppendText(
    string.Format(
        "{0} | Error: Something went wrong and we had to write to the log!!!\r\n",
        dateLogEntry.ToString("o")));
The second step is to tell the Avro library not to write the header on append, and to share the same sync marker between appends:
var avroSerializer = AvroSerializer.Create<Object>();

using (var buffer = new MemoryStream())
{
    // First write: produces the container header plus the first data blocks.
    using (var w = AvroContainer.CreateWriter<Object>(buffer, Codec.Deflate))
    {
        Console.WriteLine("Init Sample Data Set...");

        // Force a known sync marker (all zeros) via reflection, so that
        // subsequent appends can reuse the same marker.
        var headerField = w.GetType().GetField("header", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance);
        var header = headerField.GetValue(w);
        var marker = header.GetType().GetField("syncMarker", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance);
        marker.SetValue(header, new byte[16]);

        using (var writer = new SequentialWriter<Object>(w, 24))
        {
            // Serialize the data to the stream by using the sequential writer.
            for (int i = 0; i < 10; i++)
            {
                writer.Write(new Object());
            }
        }
    }

    Console.WriteLine("Append Sample Data Set...");

    // Second write: mark the header as already written so only data blocks
    // are emitted, and reuse the same sync marker as the first write.
    using (var w = AvroContainer.CreateWriter<Object>(buffer, Codec.Deflate))
    {
        var isHeaderWritten = w.GetType().GetField("isHeaderWritten", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance);
        isHeaderWritten.SetValue(w, true);

        var headerField = w.GetType().GetField("header", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance);
        var header = headerField.GetValue(w);
        var marker = header.GetType().GetField("syncMarker", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance);
        marker.SetValue(header, new byte[16]);

        using (var writer = new SequentialWriter<Object>(w, 24))
        {
            // Serialize the data to the stream by using the sequential writer.
            for (int i = 10; i < 20; i++)
            {
                writer.Write(new Object());
            }
        }
    }

    Console.WriteLine("Deserializing Sample Data Set...");
}
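To land those bytes in storage, one option (my assumption; the original answer stops at the stream) is to rewind the MemoryStream and append it to the CloudAppendBlob from the first snippet, just before the outer using block ends:

// Sketch: append the Avro bytes to the blob; place this inside the outer
// using block, after the second writer has been disposed.
// "appBlob" is the CloudAppendBlob from the earlier snippet.
buffer.Position = 0;
appBlob.AppendFromStream(buffer);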

Parse attribute of Media Element

I want to parse the url attribute from the XML and show the image it refers to in an Image control inside a ListBox, using the following feed: http://feeds.bbci.co.uk/news/rss.xml
My code is:
var ComingNewsFromUri = from rss in XElement.Parse(e.Result).Descendants("item")
                        select new NewsItems
                        {
                            Title = rss.Element("title").Value,
                            PubDate = rss.Element("pubDate").Value,
                            Description = rss.Element("description").Value
                        };
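For the url attribute itself: in the BBC feed the image URL lives on a media:thumbnail element, which is in the Media RSS namespace, so it has to be read with an XNamespace rather than a plain element name. A minimal sketch under that assumption (the ImageUrl property on NewsItems is hypothetical):

// Media RSS elements live in their own namespace, not the feed's default one.
XNamespace media = "http://search.yahoo.com/mrss/";

var ComingNewsFromUri = from rss in XElement.Parse(e.Result).Descendants("item")
                        select new NewsItems
                        {
                            Title = rss.Element("title").Value,
                            // url attribute of the first media:thumbnail, if present
                            ImageUrl = rss.Elements(media + "thumbnail")
                                          .Select(t => (string)t.Attribute("url"))
                                          .FirstOrDefault()
                        };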
For RSS, I would recommend using SyndicationFeed and SyndicationItem; they do all the parsing and converting to objects automatically and brilliantly for you.
http://ryanhayes.net/blog/how-to-build-an-rss-feed-reader-in-windows-phone-7part-i-retrieving-parsing-and-displaying-post-titles/
I have an RSS feed app on the store myself that uses SyndicationFeed, and it is very reliable and convenient.
Here is another sample from Microsoft:
http://code.msdn.microsoft.com/wpapps/RSS-Reader-Sample-1702775f
