Agora - unable to merge video .ts files into one single video file - python-3.x

I am using agora.io for video calling, running the script on my localhost.
I am able to record the video successfully, but the recording is split into multiple .ts files.
I downloaded the Python script from the Agora website and ran it. It runs without any error, but it does not generate a single merged video file; in short, the script completes successfully but nothing happens.
No errors, no new file generated.
The code I am using is:
#!/usr/bin/env python
import time
import re
import os
import sys
import signal
import glob
import parser_metadata_files
import video_convert
from optparse import OptionParser
import traceback

if '__main__' == __name__:
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    signal.signal(signal.SIGQUIT, signal.SIG_IGN)
    parser = OptionParser()
    parser.add_option("-f", "--folder", type="string", dest="folder", help="Convert folder", default="")
    parser.add_option("-m", "--mode", type="int", dest="mode", help="Convert merge mode, \
        [0: txt merge A/V(Default); 1: uid merge A/V; 2: uid merge audio; 3: uid merge video]", default=0)
    parser.add_option("-p", "--fps", type="int", dest="fps", help="Convert fps, default 15", default=15)
    parser.add_option("-s", "--saving", action="store_true", dest="saving", help="Convert Do not time sync",
                      default=False)
    parser.add_option("-r", "--resolution", type="int", dest="resolution", nargs=2,
                      help="Specific resolution to convert '-r width height' \nEg:'-r 640 360'", default=(0, 0))
    (options, args) = parser.parse_args()

    if not options.folder:
        parser.print_help()
        parser.error("Not set folder")

    try:
        print('1')
        os.environ['FPSARG'] = "%s" % options.fps
        print('2')
        parser_metadata_files.cmds_parse(["dispose", options.folder])
        print('3')
        video_convert.do_work()
        print('4')
        parser_metadata_files.cmds_parse(["clean", options.folder])
        print('5')
    except Exception as e:
        traceback.print_exc()
The command I am running is:
/usr/local/bin/python3.7 convert.py -f /Users/msmexmac/Desktop/Cloud_Recording_tools/tiles/ -m 3 -p 30
I downloaded the script from this page.

The reason you see multiple .ts files is that, after the recording starts, the Agora server automatically splits the recorded content into multiple TS/WebM files and keeps uploading them to the third-party cloud storage until the recording stops.
Make sure to follow the steps in the link below for uploading the recorded video:
https://docs.agora.io/en/cloud-recording/cloud_recording_rest
It is crucial to receive the "uploaded" callback before proceeding further.
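If the convert script still produces nothing once all segments are downloaded, a fallback (a generic approach, not the official Agora tool) is to concatenate the .ts segments with ffmpeg's concat demuxer. A minimal sketch, assuming ffmpeg is installed and the sorted file names in tiles/ match playback order:
# Sketch: merge .ts segments with ffmpeg's concat demuxer.
# Assumes ffmpeg is on PATH and sorted names match playback order.
import glob
import subprocess

segments = sorted(glob.glob('tiles/*.ts'))
with open('segments.txt', 'w') as f:
    for seg in segments:
        f.write("file '%s'\n" % seg)

# '-c copy' concatenates without re-encoding; only the container changes.
subprocess.run(['ffmpeg', '-f', 'concat', '-safe', '0',
                '-i', 'segments.txt', '-c', 'copy', 'merged.mp4'], check=True)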

Related

How to catch an exception and get exception type in Python?

[Question] I need the user to provide a Google Drive link to their kaggle.json file so I can set it up. When the script is run for the first time, without a kaggle.json file in place, it throws an error that I am trying to handle, so far without success. My first question is the same as the title of this post; my second is whether any of this makes sense at all, or whether there is a better way to do it.
[Background] I am trying to write a script that acts as an interface providing limited access to the functionality of the Kaggle library, so that it runs in my projects while still being shareable on GitHub for others to use in similar projects. I will bundle it with a configuration-management tool or with a shell script.
This is the code:
#!/usr/bin/env python3
import os
import sys
import traceback
import gdown
import kaggle
import argparse

"""
A wrapper around the kaggle library that provides limited access to the kaggle library
"""

#*hyperparameters
dataset = 'roopahegde/cryptocurrency-timeseries-2020'
raw_data_folder = './raw'
kaggle_api_file_link = None

#*argument parser
parser = argparse.ArgumentParser(description="download dataset")
parser.add_argument('--kaggle_api_file', type=str, default=kaggle_api_file_link,
                    help="download and set kaggle API file [Gdrive link]")
parser.add_argument("--kaggle_dataset", type=str, default=dataset,
                    help="download kaggle dataset using user/datasets_name")
parser.add_argument("--create_folder", type=str, default=raw_data_folder,
                    help="create folder to store raw datasets")
group = parser.add_mutually_exclusive_group()
group.add_argument('-preprocess_folder', action="store_true",
                   help="create folder to store preprocessed datasets")
group.add_argument('-v', '--verbose', action="store_true", help="print verbose output")
group.add_argument('-q', '--quiet', action="store_true", help="print quiet output")
args = parser.parse_args()

#*setting kaggle_api_file
if args.kaggle_api_file:
    gdown.download(args.kaggle_api_file, os.path.expanduser('~'), fuzzy=True)

#*creating directories if they do not exist
if not os.path.exists(args.create_folder):
    os.mkdir(args.create_folder)
if not os.path.exists('./preprocessed') and args.preprocess_folder:
    os.mkdir('./preprocessed')

def main():
    try:
        #*downloading datasets using kaggle.api
        kaggle.api.authenticate()
        kaggle.api.dataset_download_files(
            args.kaggle_dataset, path=args.create_folder, unzip=True)
        kaggle.api.competition_download_files  # note: no parentheses, so this line does nothing
        #*output
        if args.verbose:
            print(
                f"Dataset downloaded from https://www.kaggle.com/{args.kaggle_dataset} in {args.create_folder}")
        elif args.quiet:
            pass
        else:
            print(f"Download Complete")
    except Exception as ex:
        print(f"Error occurred {type(ex)} {ex.args} use flag --kaggle_api_file to download and set kaggle api file")

if __name__ == '__main__':
    sys.exit(main())
I tried to catch IOError and OSError instead of catching the generic Exception, still with no success. I want to print a message telling the user to use the --kaggle_api_file flag to set up the kaggle.json file.
This is the error:
python get_data.py
Traceback (most recent call last):
File "get_data.py", line 7, in <module>
import kaggle
File "/home/user/.local/lib/python3.8/site-packages/kaggle/__init__.py", line 23, in <module>
api.authenticate()
File "/home/user/.local/lib/python3.8/site-packages/kaggle/api/kaggle_api_extended.py", line 164, in authenticate
raise IOError('Could not find {}. Make sure it\'s located in'
OSError: Could not find kaggle.json. Make sure it's located in /home/user/.kaggle. Or use the environment method.
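The traceback shows why the except block in main() never fires: the kaggle package calls api.authenticate() at import time (line 23 of kaggle/__init__.py), so the OSError is raised at import kaggle on line 7, before main() even runs. A minimal sketch of one way around this, deferring the import so the failure lands inside the try block (the message text is illustrative):
# Sketch: defer the kaggle import so the missing-kaggle.json error is catchable.
import sys

def main():
    try:
        import kaggle  # authenticate() runs here and raises OSError without kaggle.json
    except OSError as ex:
        print(f"Error occurred {type(ex)}: use flag --kaggle_api_file to set up kaggle.json")
        return 1
    kaggle.api.dataset_download_files(
        'roopahegde/cryptocurrency-timeseries-2020', path='./raw', unzip=True)
    return 0

if __name__ == '__main__':
    sys.exit(main())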

JupyterLab 3: how to get the list of running servers

Since JupyterLab 3.x, jupyter-server is used instead of the classic notebook server, and the following code does not list servers served with jupyter_server:
from notebook import notebookapp
notebookapp.list_running_servers()
None
What still works for the file/notebook name is:
from time import sleep
from IPython.display import display, Javascript
import subprocess
import os
import uuid
def get_notebook_path_and_save():
    magic = str(uuid.uuid1()).replace('-', '')
    print(magic)
    # saves it (ctrl+S)
    # display(Javascript('IPython.notebook.save_checkpoint();'))  # Javascript Error: IPython is not defined
    nb_name = None
    while nb_name is None:
        try:
            sleep(0.1)
            # grep exits non-zero until the magic string has been saved to disk
            nb_name = subprocess.check_output(f'grep -l {magic} *.ipynb', shell=True).decode().strip()
        except subprocess.CalledProcessError:
            pass
    return os.path.join(os.getcwd(), nb_name)
But it's neither Pythonic nor fast.
How can I get the currently running server instances, and from those, e.g., the current notebook file?
Migration to jupyter_server should be as easy as changing notebook to jupyter_server, notebookapp to serverapp, and updating the appropriate configuration files; the server-related codebase is largely unchanged. To list the running servers, simply use:
from jupyter_server import serverapp
serverapp.list_running_servers()
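A quick usage sketch: each entry is a dict describing one server. Key names such as 'url' and 'root_dir' are what recent jupyter_server versions return, so treat them as assumptions that may vary by version:
from jupyter_server import serverapp

# list_running_servers() yields one dict per running server
for server in serverapp.list_running_servers():
    print(server.get('url'), server.get('root_dir'))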

DLL problem after compiling with cx_Freeze when importing win32wnet

I created a script that connects to a network folder with a specific username and password that I don't give to the user. The script connects to the network folder for one or two seconds, does its work, and then disconnects, so the user can't access the network folder afterwards.
It works fine in my development environment.
I used cx_Freeze to convert my .py to an .exe (I have used it for other little programs many times).
The problem is that the .exe file works fine only on the same PC where I developed the app. On every other PC it gives me this error: File "network.py", line 1, in ImportError: DLL load failed: Le module spécifié est introuvable (in English: the specified module could not be found).
I tried to add the DLL for win32wnet, but it did not work.
What am I doing wrong?
Here are my code and my cx_Freeze setup script:
'''
import win32wnet
import os
import re

# configure initial parameters
shareFolder = "\\\\ultra\\circuit-bgo"
usager = "foo"
motPasse = "foo"

# use win32wnet to create a resource to connect or disconnect
net_resource = win32wnet.NETRESOURCE()
net_resource.lpRemoteName = shareFolder

# try to disconnect first, to be sure no connection still exists
try:
    win32wnet.WNetCancelConnection2(net_resource.lpRemoteName, 0, 0)
except:
    pass

# create the connection to the network folder
laConnection = win32wnet.WNetAddConnection2(net_resource, motPasse, usager, 0)
if os.path.exists(net_resource.lpRemoteName):
    print("connection réussi")  # connection succeeded
    # do some stuff, like read, write and modify some files (1 or 2 seconds)
else:
    print("connection ÉCHOUÉ")  # oops, connection failed

# disconnect from the network folder; the user must not be able to access it afterwards
try:
    win32wnet.WNetCancelConnection2(net_resource.lpRemoteName, 0, 0)
except:
    pass
'''
The cx_Freeze setup script:
'''
import os
import sys
from cx_Freeze import setup, Executable

base = None
if sys.platform == 'win32':
    base = 'Win32GUI'

# syspath = "c:\\Python32\\Lib\\site-packages\\win32\\perfmondata.dll"
buildOptions = dict(
    packages=['win32wnet'],
    excludes=[],
    include_files=['perfmondata.dll'],
)
executables = [Executable('network.py', base=base)]

setup(name='TestNetwork',
      version='0.1',
      options=dict(build_exe=buildOptions),
      description='NetWork',
      executables=executables)
'''
And this is the basic command I normally use when compiling with cx_Freeze.
This is a batch file:
cxfreeze.bat "c:/Python32/Scripts/network.py" --base-name=Win32GUI --target-dir C:/Python32/Scripts/Dist_Network --icon c:/Python32/Scripts/Logo.ico
After many, many tests and reinstallations of Python and its modules, I had to find the DLLs that were causing the problem. I found that three DLL files have to be copied into the same folder as the new .exe file (the program): pythoncom32.dll, pythoncomloader32.dll and pywintypes32.dll. You can find these files in c:\Windows\SysWOW64 or System32, depending on your Python installation (32-bit or 64-bit). A sketch of bundling them at build time follows below.
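Rather than copying the DLLs by hand next to every build, they could presumably be bundled at build time through the same include_files option used in the setup script above. A sketch, where the source directory is an assumption that depends on your Windows/Python installation:
import os

# Sketch: bundle the pywin32 runtime DLLs next to the frozen .exe.
# SysWOW64 vs System32 depends on whether Python is 32-bit or 64-bit.
dll_dir = r'c:\Windows\SysWOW64'
pywin32_dlls = ['pythoncom32.dll', 'pythoncomloader32.dll', 'pywintypes32.dll']

buildOptions = dict(
    packages=['win32wnet'],
    excludes=[],
    include_files=['perfmondata.dll'] +
                  [os.path.join(dll_dir, name) for name in pywin32_dlls],
)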
If you have a better solution, you can add it.
Thanks

Creating a Spark RDD from a file located in Google Drive using Python on Colab.Research.Google

I have been successfully running a Python 3 / Spark 2.2.1 program on Google's Colab Research platform:
!apt-get update
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q http://apache.osuosl.org/spark/spark-2.2.1/spark-2.2.1-bin-hadoop2.7.tgz
!tar xf spark-2.2.1-bin-hadoop2.7.tgz
!pip install -q findspark
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.2.1-bin-hadoop2.7"
import findspark
findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").getOrCreate()
This works perfectly when I upload text files from my local computer to the Unix VM using
from google.colab import files
datafile = files.upload()
and read them as follows:
textRDD = spark.read.text('hobbit.txt').rdd
So far so good.
My problem starts when I try to read a file that is lying in my Google Drive Colab directory.
Following the instructions, I authenticated the user and created a drive service:
from google.colab import auth
auth.authenticate_user()
from googleapiclient.discovery import build
drive_service = build('drive', 'v3')
after which I was able to access the file lying in the Drive as follows:
file_id = '1RELUMtExjMTSfoWF765Hr8JwNCSL7AgH'
import io
from googleapiclient.http import MediaIoBaseDownload
request = drive_service.files().get_media(fileId=file_id)
downloaded = io.BytesIO()
downloader = MediaIoBaseDownload(downloaded, request)
done = False
while done is False:
    # _ is a placeholder for a progress object that we ignore.
    # (Our file is small, so we skip reporting progress.)
    _, done = downloader.next_chunk()
downloaded.seek(0)
print('Downloaded file contents are: {}'.format(downloaded.read()))
Downloaded file contents are: b'The king beneath the mountain\r\nThe king of ......
Even this works perfectly:
downloaded.seek(0)
print(downloaded.read().decode('utf-8'))
and gets the data:
The king beneath the mountain
The king of carven stone
The lord of silver fountain ...
Where things FINALLY GO WRONG is when I try to grab this data and put it into a Spark RDD:
downloaded.seek(0)
tRDD = spark.read.text(downloaded.read().decode('utf-8'))
and I get the error:
AnalysisException: 'Path does not exist: file:/content/The king beneath the mountain\ ....
Evidently, I am not using the correct method/parameters to read the file into Spark. I have tried quite a few of the methods described, without success.
I would be very grateful if someone could help me figure out how to read this file for subsequent processing.
A complete solution to this problem is available in another StackOverflow question, at this URL.
Here is the notebook where the solution is demonstrated.
I have tested it, and it works!
It seems that spark.read.text expects a file name, but you are giving it the file content instead. You can try either of these (a sketch of the first option follows this list):
save it to a file, then pass the file name
use just downloaded instead of downloaded.read().decode('utf-8')
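A minimal sketch of the first option, reusing the downloaded buffer and spark session from the question (the local file name is illustrative):
# Sketch: write the downloaded bytes to a local file, then point Spark at it.
downloaded.seek(0)
with open('hobbit.txt', 'wb') as f:
    f.write(downloaded.read())

textRDD = spark.read.text('hobbit.txt').rdd
print(textRDD.take(3))  # first few lines, to confirm the read worked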
You can also simplify downloading from Google Drive with pydrive. I gave an example here:
https://gist.github.com/korakot/d56c925ff3eccb86ea5a16726a70b224
Downloading is then just:
fid = drive.ListFile({'q':"title='hobbit.txt'"}).GetList()[0]['id']
f = drive.CreateFile({'id': fid})
f.GetContentFile('hobbit.txt')
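For completeness, the drive object in that snippet comes from the usual Colab + PyDrive authentication boilerplate; a sketch of the standard pattern (not taken from the gist itself):
# Sketch: standard Colab/PyDrive authentication, which defines `drive`.
from google.colab import auth
from oauth2client.client import GoogleCredentials
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)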

ipython notebook --script deprecated. How to replace with post save hook?

I have been using ipython notebook --script to automatically save a .py file for each IPython notebook, so I can use it to import classes into other notebooks. But this recently stopped working, and I get the following error message:
`--script` is deprecated. You can trigger nbconvert via pre- or post-save hooks:
ContentsManager.pre_save_hook
FileContentsManager.post_save_hook
A post-save hook has been registered that calls:
ipython nbconvert --to script [notebook]
which behaves similarly to `--script`.
As I understand this, I need to set up a post-save hook, but I do not understand how to do that. Can someone explain?
[UPDATED per comment by @mobius dumpling]
Find your config files:
Jupyter / ipython >= 4.0
jupyter --config-dir
ipython <4.0
ipython locate profile default
If you need a new config:
Jupyter / ipython >= 4.0
jupyter notebook --generate-config
ipython <4.0
ipython profile create
Within this directory, there will be a file called [jupyter | ipython]_notebook_config.py, put the following code from ipython's GitHub issues page in that file:
import os
from subprocess import check_call

c = get_config()

def post_save(model, os_path, contents_manager):
    """post-save hook for converting notebooks to .py scripts"""
    if model['type'] != 'notebook':
        return  # only do this for notebooks
    d, fname = os.path.split(os_path)
    check_call(['ipython', 'nbconvert', '--to', 'script', fname], cwd=d)

c.FileContentsManager.post_save_hook = post_save
For Jupyter, replace ipython with jupyter in check_call.
Note that there's a corresponding 'pre-save' hook, and also that you can call any subprocess or run arbitrary code there, if you want to do anything fancy like checking some condition first, notifying API consumers, or adding a git commit for the saved script (see the sketch below).
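For instance, a hypothetical variant of the hook above that also commits the exported script; it assumes the notebook directory is a git repository, and the commit message is illustrative:
import os
from subprocess import check_call

def post_save_and_commit(model, os_path, contents_manager):
    """post-save hook: convert the notebook to a script, then git-commit it"""
    if model['type'] != 'notebook':
        return
    d, fname = os.path.split(os_path)
    check_call(['jupyter', 'nbconvert', '--to', 'script', fname], cwd=d)
    script = os.path.splitext(fname)[0] + '.py'
    check_call(['git', 'add', script], cwd=d)
    check_call(['git', 'commit', '-m', 'autosave %s' % script], cwd=d)

c.FileContentsManager.post_save_hook = post_save_and_commit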
Cheers,
-t.
Here is another approach that doesn't spawn a new process (via check_call). Add the following to jupyter_notebook_config.py, as in Tristan's answer:
import io
import os
from notebook.utils import to_api_path

_script_exporter = None

def script_post_save(model, os_path, contents_manager, **kwargs):
    """convert notebooks to Python script after save with nbconvert

    replaces `ipython notebook --script`
    """
    from nbconvert.exporters.script import ScriptExporter

    if model['type'] != 'notebook':
        return

    global _script_exporter
    if _script_exporter is None:
        _script_exporter = ScriptExporter(parent=contents_manager)

    log = contents_manager.log

    base, ext = os.path.splitext(os_path)
    script, resources = _script_exporter.from_filename(os_path)
    script_fname = base + resources.get('output_extension', '.txt')
    log.info("Saving script /%s", to_api_path(script_fname, contents_manager.root_dir))
    with io.open(script_fname, 'w', encoding='utf-8') as f:
        f.write(script)

# `c` is the config object provided in jupyter_notebook_config.py (via get_config())
c.FileContentsManager.post_save_hook = script_post_save
Disclaimer: I'm pretty sure I got this from SO somewhere, but I can't find it now. Putting it here so it's easier to find in future (:
I just encountered a problem where I didn't have the rights to restart my Jupyter instance, so the post-save hook I wanted couldn't be applied.
So I extracted the key parts and could run them with python manual_post_save_hook.py:
from io import open
from re import sub
from os.path import splitext
from nbconvert.exporters.script import ScriptExporter

for nb_path in ['notebook1.ipynb', 'notebook2.ipynb']:
    base, ext = splitext(nb_path)
    script, resources = ScriptExporter().from_filename(nb_path)
    # mine happen to all be in Python so I needn't bother with the full flexibility
    script_fname = base + '.py'
    with open(script_fname, 'w', encoding='utf-8') as f:
        # remove 'In [ ]' commented lines peppered about
        f.write(sub(r'[\n]{2}# In\[[0-9 ]+\]:\s+[\n]{2}', '\n', script))
You can add your own bells and whistles as you would with the standard post-save hook, and the config is the correct way to proceed; I'm sharing this for others who might end up in a similar pinch, where they can't get the config edits to take effect.
