Import Backed-up DB from .file format into SQL Server Express - sql-server-2014-express

I received a .file which I need to import into my local SQL Server Express 2014.
I can't figure out how to import this file format.
Any suggestions?

Just found the solution:
I changed the file extension to .bak and then restored the database. It worked for me.
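For reference, the same restore can also be scripted instead of done through Management Studio. A minimal sketch in Python using pyodbc; the instance name, backup path, and database name are all assumptions:

import pyodbc

# Assumed: a local instance .\SQLEXPRESS, the renamed backup at C:\backups\Library.bak,
# and a target database name of Library.
conn = pyodbc.connect(
    r"DRIVER={ODBC Driver 17 for SQL Server};SERVER=.\SQLEXPRESS;Trusted_Connection=yes",
    autocommit=True,  # RESTORE cannot run inside a user transaction
)
conn.execute(r"RESTORE DATABASE Library FROM DISK = 'C:\backups\Library.bak'")
conn.close()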

Related

Unable to read config file using configparser in Databricks

I want to read some values as parameters using configparser in Databricks.
I can import the configparser module in Databricks, but I am unable to read the parameters from the config file; it fails with a KeyError.
(The original question showed the error and the config file in screenshots.)
The problem is that your file is located on DBFS (the /FileStore/... path), and this file system isn't understood by configparser, which works with the "local" file system. To get this working, you need to prepend /dbfs to the file path: /dbfs/FileStore/....
P.S. This may not work on Community Edition with DBR 7.x. In that case, just copy the config file to the local file system with dbutils.fs.cp before reading it, like this:
dbutils.fs.cp("/FileStore/...", "file:///tmp/config.ini")
config.read("/tmp/config.ini")
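Putting the /dbfs prefix approach together, a minimal sketch (the file path, section, and option names are assumptions):

import configparser

config = configparser.ConfigParser()
config.read("/dbfs/FileStore/configs/app.ini")  # /dbfs exposes DBFS as a local path
db_host = config["database"]["host"]  # hypothetical section and option names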

Matplotlib created a temporary config/cache directory

Matplotlib created a temporary config/cache directory at /var/www/.config/matplotlib because the default path (/tmp/matplotlib-b33qbx_v) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
This is the message I'm getting in the error.log file, together with a 504 Gateway Timeout error in the browser.
Can someone please help resolve this issue?
Please check:
https://github.com/pyinstaller/pyinstaller/issues/617
I run matplotlib from the web server and use:
os.environ['MPLCONFIGDIR'] = '/opt/myapplication/.config/matplotlib'
This directory must be writable by the web server user (e.g. www-data).
import os
os.environ['MPLCONFIGDIR'] = os.getcwd() + "/configs/"
before
import matplotlib
works for me
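A slightly fuller sketch of the same idea, which also makes sure the directory exists before matplotlib is imported (the path is an assumption):

import os

# Any directory writable by the web server user (e.g. www-data) will do.
cache_dir = os.path.join(os.getcwd(), "configs")
os.makedirs(cache_dir, exist_ok=True)
os.environ['MPLCONFIGDIR'] = cache_dir

import matplotlib  # now picks up the writable cache directory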

File Not Found Error when trying to read csv file in Jupyter Notebook

I am trying to open a CSV file, and Jupyter keeps throwing errors. I am using the full path and still no luck. GA.csv is the name of the file, and it is saved to my Desktop.
My code:
import pandas as pd
df = pd.read_csv("/Users⁩/⁨nicholasgoodman/Desktop/GA.csv")
When I run this, I get the error message below. I've tried moving the file, I'm sure this is the correct directory, and this method for opening the file has worked in the past for me.
FileNotFoundError: [Errno 2] File
b'/users\xe2\x81\xa9/\xe2\x81\xa8nicholasgoodman/desktop/GA.csv'
does not exist:
b'/users\xe2\x81\xa9/\xe2\x81\xa8nicholasgoodman/desktop/GA.csv'
Are you working on Windows? Then try this with your path.
import pandas as pd
df = pd.read_csv('C:/Users/laman/Downloads/test.csv')
df.head(5)
I got this result:
    col1   col2
0  test1  test2
1   1111   2222
2   3333   4444
You may also try a relative path instead of an absolute one, depending on where your Jupyter notebook is located.
df = pd.read_csv('../../Download/test.csv')
If you copied the path from somewhere, try typing it again by hand. You may have copied invisible characters that are not allowed in a path.
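To check a pasted path for invisible characters, here is a quick sketch (the path string is the one from the question, and pandas is assumed to be imported as pd):

# Reveal any non-ASCII marks hiding in the pasted path.
path = "/Users⁩/⁨nicholasgoodman/Desktop/GA.csv"  # pasted from the question
print(path.encode("unicode_escape"))  # invisible characters show up as \u escapes

# Strip everything outside plain ASCII and retry.
clean_path = "".join(ch for ch in path if ch.isascii())
df = pd.read_csv(clean_path)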

Creating a Spark RDD from a file located in Google Drive using Python on Colab.Research.Google

I have been successful in running a Python 3 / Spark 2.2.1 program on Google's Colab research platform:
!apt-get update
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q http://apache.osuosl.org/spark/spark-2.2.1/spark-2.2.1-bin-hadoop2.7.tgz
!tar xf spark-2.2.1-bin-hadoop2.7.tgz
!pip install -q findspark
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.2.1-bin-hadoop2.7"
import findspark
findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").getOrCreate()
This works perfectly when I upload text files from my local computer to the Unix VM using
from google.colab import files
datafile = files.upload()
and read them as follows:
textRDD = spark.read.text('hobbit.txt').rdd
So far so good.
My problem starts when I try to read a file that is sitting in my Google Drive Colab directory.
Following the instructions, I have authenticated the user and created a drive service:
from google.colab import auth
auth.authenticate_user()
from googleapiclient.discovery import build
drive_service = build('drive', 'v3')
after which I have been able to access the file in the drive as follows:
file_id = '1RELUMtExjMTSfoWF765Hr8JwNCSL7AgH'
import io
from googleapiclient.http import MediaIoBaseDownload
request = drive_service.files().get_media(fileId=file_id)
downloaded = io.BytesIO()
downloader = MediaIoBaseDownload(downloaded, request)
done = False
while done is False:
    # _ is a placeholder for a progress object that we ignore.
    # (Our file is small, so we skip reporting progress.)
    _, done = downloader.next_chunk()
downloaded.seek(0)
print('Downloaded file contents are: {}'.format(downloaded.read()))
Downloaded file contents are: b'The king beneath the mountain\r\nThe king of ......
Even this works perfectly:
downloaded.seek(0)
print(downloaded.read().decode('utf-8'))
and gets the data
The king beneath the mountain
The king of carven stone
The lord of silver fountain ...
Where things FINALLY GO WRONG is when I try to grab this data and put it into a Spark RDD:
downloaded.seek(0)
tRDD = spark.read.text(downloaded.read().decode('utf-8'))
and I get the error:
AnalysisException: 'Path does not exist: file:/content/The king beneath the mountain\ ....
Evidently, I am not using the correct method / parameters to read the file into Spark. I have tried quite a few of the methods described in other answers, without success.
I would be very grateful if someone can help me figure out how to read this file for subsequent processing.
A complete solution to this problem is available in another Stack Overflow question, along with a notebook where the solution is demonstrated.
I have tested it and it works!
It seems that spark.read.text expects a file name, but you are giving it the file content instead. You can try either of these:
save the content to a file, then pass the file name (as sketched below)
pass just downloaded instead of downloaded.read().decode('utf-8')
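A minimal sketch of the first option, writing the downloaded bytes to a local file and handing Spark the file name:

downloaded.seek(0)
with open('hobbit.txt', 'wb') as f:
    f.write(downloaded.read())  # materialize the buffer on local disk

tRDD = spark.read.text('hobbit.txt').rdd  # Spark now gets a path, not content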
You can also simplify downloading from Google Drive with pydrive. I gave an example here.
https://gist.github.com/korakot/d56c925ff3eccb86ea5a16726a70b224
Downloading is then just (drive here is an authenticated GoogleDrive instance, as set up in the gist):
fid = drive.ListFile({'q':"title='hobbit.txt'"}).GetList()[0]['id']
f = drive.CreateFile({'id': fid})
f.GetContentFile('hobbit.txt')

File downloaded larger than original

I'm working on a little Python 3 server, and I want to download an SQLite database from this server. But when I tried that, I discovered that the downloaded file is larger than the original: the original file size is 108K, the downloaded file size is 247K. I've tried this many times, and each time I got the same result. I also compared the sha256 checksums, which differ.
Here is my downloader.py file:
import cgi
import os
print('Content-Type: application/octet-stream')
print('Content-Disposition: attachment; filename="Library.db"\n')
db = os.path.realpath('..') + '/Library.db'
with open(db, 'rb') as file:
    print(file.read())
Thanks in advance!
EDIT:
I tried this:
$ ./downloader > file
The resulting file's size is also 247K.
Well, I've finally found the solution. The problem (which I didn't see at first) was that the server sent plain text to the client: print writes the repr of the bytes object rather than the raw bytes. Here is one way to send binary data:
import cgi
import os
import shutil
import sys
print('Content-Type: application/octet-stream; file="Library.db"')
print('Content-Disposition: attachment; filename="Library.db"\n')
sys.stdout.flush()
db = os.path.realpath('..') + '/Library.db'
with open(db, 'rb') as file:
    shutil.copyfileobj(file, sys.stdout.buffer)
But if someone has a better approach, I would be glad to see it! Thank you!
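As a small illustration (not from the original post) of why the plain-text version inflates the file: print writes the repr of a bytes object, so non-printable bytes expand into multi-character escapes.

import sys

data = bytes([0, 1, 255])
print(data)  # writes b'\x00\x01\xff' -- 3 bytes become 15 characters
sys.stdout.buffer.write(data)  # writes exactly the 3 raw bytes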
