How to fix "RuntimeError: CUDA error: out of memory" - pytorch

I trained successfully on a single GPU, but it fails on multiple GPUs. I checked the code: it just sets some values in a map and then starts multi-GPU training with calls like torch.distributed.barrier.
I ran the setup below, but it still fails even with batch size = 1.
docker exec -e NVIDIA_VISIBLE_DEVICES=0,1,2,3 -it jy /bin/bash
os.environ["CUDA_VISIBLE_DEVICES"] = '0,1,2,3'
GPU usage (from nvidia-smi):
|===============================+======================+======================|
| 0 GeForce RTX 208... On | 00000000:3D:00.0 Off | N/A |
| 24% 26C P8 21W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 GeForce RTX 208... On | 00000000:3E:00.0 Off | N/A |
| 25% 27C P8 2W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 GeForce RTX 208... On | 00000000:40:00.0 Off | N/A |
| 25% 25C P8 20W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 GeForce RTX 208... On | 00000000:41:00.0 Off | N/A |
| 26% 25C P8 15W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
The error output:
/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py:186: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
FutureWarning,
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
| distributed init (rank 2): env://
| distributed init (rank 1): env://
| distributed init (rank 3): env://
| distributed init (rank 0): env://
Traceback (most recent call last):
File "main_track.py", line 398, in <module>
main(args)
File "main_track.py", line 159, in main
utils.init_distributed_mode(args)
File "/jy/TransTrack/util/misc.py", line 459, in init_distributed_mode
torch.distributed.barrier()
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 2709, in barrier
work = default_pg.barrier(opts=opts)
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 355 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 357 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 358 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 356) of binary: /root/anaconda3/envs/pytorch171/bin/python3
Traceback (most recent call last):
File "/root/anaconda3/envs/pytorch171/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module>
main()
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
)(*cmd_args)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
main_track.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-10-13_08:54:25
host : 2f923a848f88
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 356)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
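One common cause of this exact failure (the OOM surfacing inside torch.distributed.barrier while nvidia-smi shows all GPUs nearly empty) is that every rank builds its CUDA context on GPU 0 instead of on its own device. A minimal sketch of binding each worker to its local GPU before the first collective call; the helper names are mine, not from TransTrack:

```python
import os

def pick_local_device() -> int:
    """Return the GPU index this worker should bind to.

    torchrun (and torch.distributed.launch with --use_env) export
    LOCAL_RANK per process; falling back to 0 keeps single-GPU runs working.
    """
    return int(os.environ.get("LOCAL_RANK", "0"))

def init_distributed():
    # Imported here so the helper above is usable without a GPU.
    import torch
    import torch.distributed as dist
    # Bind this process to its own GPU *before* the first collective;
    # if every rank creates its CUDA context on GPU 0, the barrier can
    # fail with a misleading "CUDA error: out of memory".
    torch.cuda.set_device(pick_local_device())
    dist.init_process_group(backend="nccl")
    dist.barrier()
```

Launched with e.g. `torchrun --nproc_per_node=4 main_track.py ...`, each worker gets its own LOCAL_RANK, so `set_device` spreads the contexts across the four cards.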

Related

Why does program execution time differ when running the same program multiple times?

Consider a CPU-bound Node.js program that generates prime numbers:
// generatePrimes.js
// long running / CPU-bound calculation
function generatePrimes(start, range) {
  const primes = []
  let isPrime = true
  let end = start + range
  for (let i = start; i < end; i++) {
    for (let j = start; j < Math.sqrt(end); j++) {
      if (i !== j && i % j === 0) {
        isPrime = false
        break
      }
    }
    if (isPrime) {
      primes.push(i)
    }
    isPrime = true
  }
  return primes
}

function main() {
  const min = 2
  const max = 1e7
  console.log( generatePrimes(min, max) )
}

if (require.main === module)
  main()

module.exports = { generatePrimes }
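To check whether the timing behaviour is Node-specific, the same trial-division loop can be ported line-for-line to Python (keeping the original's loop bounds, quirks included), and timed with the same `/usr/bin/time` harness:

```python
import math

def generate_primes(start, rng):
    """Line-for-line port of the JS generatePrimes (same loop bounds)."""
    primes = []
    end = start + rng
    for i in range(start, end):
        is_prime = True
        # mirrors `for (let j = start; j < Math.sqrt(end); j++)`
        for j in range(start, math.ceil(end ** 0.5)):
            if i != j and i % j == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(i)
    return primes
```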
My HW/OS configuration:
Laptop with Linux Ubuntu 20.04.2 LTS desktop environment, with 8 cores:
$ inxi -C -M
Machine: Type: Laptop System: HP product: HP Laptop 17-by1xxx v: Type1ProductConfigId serial: <superuser/root required>
Mobo: HP model: 8531 v: 17.16 serial: <superuser/root required> UEFI: Insyde v: F.32 date: 12/14/2018
CPU: Topology: Quad Core model: Intel Core i7-8565U bits: 64 type: MT MCP L2 cache: 8192 KiB Speed: 700 MHz min/max: 400/4600 MHz Core speeds (MHz): 1: 700 2: 700 3: 700 4: 700 5: 700 6: 700 7: 700 8: 700
$ echo "CPU threads: $(grep -c processor /proc/cpuinfo)"
CPU threads: 8
Now, let's measure the elapsed time:
$ /usr/bin/time -f "%e" node generatePrimes.js
[
2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37,
41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89,
97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151,
157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223,
227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277, 281,
283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359,
367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431, 433,
439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503,
509, 521, 523, 541,
... 664479 more items
]
7.99
OK, if I run the program once, the elapsed time is ~8 seconds.
But consider the bash script testGeneratePrimes.sh, which measures elapsed times while running the same program 6 times, in sequence and then in parallel:
#!/bin/bash
# Get number of runs, as command line argument
if [ $# -eq 0 ]
then
  echo
  echo -e " run n processes, in sequence and in parallel."
  echo
  echo -e " usage:"
  echo -e "   $0 <number of runs>"
  echo
  echo -e " examples:"
  echo -e "   $0 6"
  echo -e "   run 6 times"
  echo
  exit 1
fi

numRuns=$1

# run a single instance of the process 'node generatePrimes'
runProcess() {
  /usr/bin/time -f "%e" node generatePrimes.js > /dev/null
}

echo
echo "SEQUENCE TEST: running generatePrimes, $numRuns successive sequential times"
echo
for i in $(seq $numRuns); do
  runProcess
done

echo
echo "PARALLEL TEST: running generatePrimes, $numRuns in parallel (background processes)"
echo
for i in $(seq $numRuns); do
  runProcess &
done
wait < <(jobs -p)
Running the script (6 processes):
$ ./testGeneratePrimes.sh 6
SEQUENCE TEST: running generatePrimes, 6 successive sequential times
8.16
9.09
11.44
11.57
12.93
12.00
PARALLEL TEST: running generatePrimes, 6 in parallel (background processes)
25.99
26.16
30.51
30.64
31.60
31.60
I see that:
in the sequence test, elapsed times increase with each run, from ~8 seconds up to ~12 seconds?!
in the parallel test, elapsed times range from ~25 seconds to ~31 seconds?!
That's insane.
I do not understand why! Maybe it's a Linux scheduler limitation? A CPU hardware limitation/issue?
I also tried the commands:
nice -20 /usr/bin/time -f "%e" node generatePrimes.js or
taskset -c 0-7 /usr/bin/time -f "%e" node generatePrimes.js
but neither made any significant difference to the described behavior.
Questions:
Why do the elapsed times vary so much?
Is there any way to configure Linux so it does not limit per-process CPU usage?
BTW, a related question: running multiple nodejs worker threads: why such a large overhead/latency?
UPDATE (MORE TESTS)
Following Nate Eldredge's suggestion (see comments), here is some info gathered using cpupower and sensors:
$ sudo cpupower -c all info
analyzing CPU 0:
perf-bias: 6
analyzing CPU 1:
perf-bias: 6
analyzing CPU 2:
perf-bias: 6
analyzing CPU 3:
perf-bias: 6
analyzing CPU 4:
perf-bias: 6
analyzing CPU 5:
perf-bias: 6
analyzing CPU 6:
perf-bias: 6
analyzing CPU 7:
perf-bias: 6
# set perf-bias to max performance
$ sudo cpupower set -b 0
$ sudo cpupower -c all info -b
analyzing CPU 0:
perf-bias: 0
analyzing CPU 1:
perf-bias: 0
analyzing CPU 2:
perf-bias: 0
analyzing CPU 3:
perf-bias: 0
analyzing CPU 4:
perf-bias: 0
analyzing CPU 5:
perf-bias: 0
analyzing CPU 6:
perf-bias: 0
analyzing CPU 7:
perf-bias: 0
$ sudo cpupower monitor
| Nehalem || Mperf || Idle_Stats
CPU| C3 | C6 | PC3 | PC6 || C0 | Cx | Freq || POLL | C1 | C1E | C3 | C6 | C7s | C8 | C9 | C10
0| 0,04| 2,53| 0,00| 0,00|| 4,23| 95,77| 688|| 0,00| 0,00| 0,08| 0,10| 2,38| 0,00| 24,65| 0,78| 67,90
4| 0,04| 2,53| 0,00| 0,00|| 3,68| 96,32| 675|| 0,00| 0,00| 0,02| 0,06| 0,76| 0,00| 10,83| 0,03| 84,75
1| 0,03| 1,94| 0,00| 0,00|| 6,39| 93,61| 656|| 0,00| 0,00| 0,04| 0,06| 1,88| 0,00| 16,19| 0,00| 75,63
5| 0,04| 1,94| 0,00| 0,00|| 1,35| 98,65| 689|| 0,00| 0,02| 1,19| 0,04| 0,40| 0,33| 3,89| 0,82| 92,02
2| 0,56| 25,49| 0,00| 0,00|| 12,88| 87,12| 673|| 0,00| 0,00| 0,84| 0,74| 28,61| 0,03| 34,48| 3,44| 19,81
6| 0,56| 25,48| 0,00| 0,00|| 4,30| 95,70| 676|| 0,00| 0,00| 0,03| 0,09| 1,48| 0,00| 22,66| 1,11| 70,52
3| 0,19| 3,61| 0,00| 0,00|| 3,67| 96,33| 658|| 0,00| 0,00| 0,02| 0,07| 1,36| 0,00| 14,85| 0,03| 80,16
7| 0,19| 3,60| 0,00| 0,00|| 6,21| 93,79| 679|| 0,00| 0,00| 0,28| 0,19| 3,48| 0,76| 31,10| 1,50| 56,75
$ sudo cpupower monitor ./testGeneratePrimes.sh 6
[sudo] password for giorgio:
SEQUENCE TEST: running generatePrimes, 6 successive sequential times
8.18
9.06
11.66
11.29
11.30
11.21
PARALLEL TEST: running generatePrimes, 6 in parallel (background processes)
20.83
20.95
28.42
28.42
28.47
28.52
./testGeneratePrimes.sh took 91,26958 seconds and exited with status 0
| Nehalem || Mperf || Idle_Stats
CPU| C3 | C6 | PC3 | PC6 || C0 | Cx | Freq || POLL | C1 | C1E | C3 | C6 | C7s | C8 | C9 | C10
0| 0,20| 1,98| 0,00| 0,00|| 33,88| 66,12| 2008|| 0,00| 0,04| 0,18| 0,18| 1,83| 0,02| 16,67| 0,22| 47,04
4| 0,20| 1,98| 0,00| 0,00|| 10,46| 89,54| 1787|| 0,00| 0,09| 0,32| 0,37| 4,01| 0,03| 25,21| 0,19| 59,37
1| 0,26| 2,40| 0,00| 0,00|| 24,52| 75,48| 1669|| 0,00| 0,06| 0,18| 0,20| 2,17| 0,00| 14,90| 0,17| 57,84
5| 0,26| 2,40| 0,00| 0,00|| 32,07| 67,93| 1662|| 0,00| 0,07| 0,19| 0,14| 1,40| 0,02| 9,31| 0,53| 56,33
2| 0,93| 13,33| 0,00| 0,00|| 31,31| 68,69| 2025|| 0,00| 0,05| 0,43| 1,00| 18,21| 0,01| 26,18| 1,74| 21,23
6| 0,93| 13,33| 0,00| 0,00|| 11,98| 88,02| 1711|| 0,00| 0,19| 0,31| 0,22| 2,63| 0,03| 18,87| 0,76| 65,04
3| 0,15| 0,98| 0,00| 0,00|| 47,38| 52,62| 2627|| 0,00| 0,07| 0,17| 0,13| 1,35| 0,01| 7,88| 0,10| 42,80
7| 0,15| 0,98| 0,00| 0,00|| 59,25| 40,75| 2235|| 0,00| 0,06| 0,18| 0,18| 1,58| 0,00| 9,31| 0,63| 28,91
$ sensors && testGeneratePrimes.sh 6 && sensors
coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +46.0°C (high = +100.0°C, crit = +100.0°C)
Core 0: +45.0°C (high = +100.0°C, crit = +100.0°C)
Core 1: +43.0°C (high = +100.0°C, crit = +100.0°C)
Core 2: +45.0°C (high = +100.0°C, crit = +100.0°C)
Core 3: +44.0°C (high = +100.0°C, crit = +100.0°C)
BAT0-acpi-0
Adapter: ACPI interface
in0: 12.89 V
curr1: 0.00 A
amdgpu-pci-0100
Adapter: PCI adapter
vddgfx: 65.49 V
edge: +511.0°C (crit = +104000.0°C, hyst = -273.1°C)
power1: 1.07 kW (cap = 30.00 W)
pch_cannonlake-virtual-0
Adapter: Virtual device
temp1: +42.0°C
acpitz-acpi-0
Adapter: ACPI interface
temp1: +47.0°C (crit = +120.0°C)
temp2: +53.0°C (crit = +127.0°C)
SEQUENCE TEST: running generatePrimes, 6 successive sequential times
8.36
9.76
11.35
11.38
11.22
11.24
PARALLEL TEST: running generatePrimes, 6 in parallel (background processes)
21.06
21.14
28.50
28.55
28.62
28.65
coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +54.0°C (high = +100.0°C, crit = +100.0°C)
Core 0: +51.0°C (high = +100.0°C, crit = +100.0°C)
Core 1: +50.0°C (high = +100.0°C, crit = +100.0°C)
Core 2: +54.0°C (high = +100.0°C, crit = +100.0°C)
Core 3: +50.0°C (high = +100.0°C, crit = +100.0°C)
BAT0-acpi-0
Adapter: ACPI interface
in0: 12.89 V
curr1: 0.00 A
amdgpu-pci-0100
Adapter: PCI adapter
vddgfx: 65.49 V
edge: +511.0°C (crit = +104000.0°C, hyst = -273.1°C)
power1: 1.07 kW (cap = 30.00 W)
pch_cannonlake-virtual-0
Adapter: Virtual device
temp1: +46.0°C
acpitz-acpi-0
Adapter: ACPI interface
temp1: +55.0°C (crit = +120.0°C)
temp2: +57.0°C (crit = +127.0°C)
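The Mperf frequency columns in the cpupower output above (≈2000 MHz under load on a CPU rated 400/4600 MHz) suggest frequency scaling as one plausible explanation: with a single busy core the CPU can turbo much higher than with all cores busy, and sustained load pulls clocks down further. A quick way to sample per-core clocks from Python while the test runs; `parse_core_speeds` is my own helper name, and on a real system you would pass it `open('/proc/cpuinfo')`:

```python
def parse_core_speeds(cpuinfo_lines):
    """Extract per-core clock speeds in MHz from /proc/cpuinfo-style lines.

    Each logical CPU contributes one 'cpu MHz : <value>' line, so the
    returned list has one entry per core/thread.
    """
    return [
        float(line.split(":")[1])
        for line in cpuinfo_lines
        if line.lower().startswith("cpu mhz")
    ]
```

Sampling this once before the run and repeatedly during it would show whether clocks sag as more processes pile on.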

Unable to connect to the PYMQI Client facing FAILED: MQRC_ENVIRONMENT_ERROR

I am getting the error below while connecting to IBM MQ using the pymqi library.
It's a clustered MQ channel.
Traceback (most recent call last):
File "postToQueue.py", line 432, in <module>
qmgr = pymqi.connect(queue_manager, channel, conn_info)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 2608, in connect
qmgr.connect_tcp_client(queue_manager or '', CD(), channel, conn_info, user, password)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 1441, in connect_tcp_client
self.connect_with_options(name, cd, user=user, password=password)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 1423, in connect_with_options
raise MQMIError(rv[1], rv[2])
pymqi.MQMIError: MQI Error. Comp: 2, Reason 2012: FAILED: MQRC_ENVIRONMENT_ERROR'
Please see my code below.
queue_manager = 'quename here'
channel = 'channel name here'
host ='host-name here'
port = '2333'
queue_name = 'queue name here'
message = 'my message here'
conn_info = '%s(%s)' % (host, port)
print(conn_info)
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
queue.put(message)
print("message sent")
queue.close()
qmgr.disconnect()
Getting error at the line below
qmgr = pymqi.connect(queue_manager, channel, conn_info)
I added the IBM client to the Scripts folder as well. I'm using Windows 10, Python 3.8.1, and the IBM MQ 9.1 Windows client installation image. Below is the FDC header:
+-----------------------------------------------------------------------------+
| |
| WebSphere MQ First Failure Symptom Report |
| ========================================= |
| |
| Date/Time :- Tue January 28 2020 16:27:51 Eastern Standard Time |
| UTC Time :- 1580246871.853000 |
| UTC Time Offset :- -300 (Eastern Standard Time) |
| Host Name :- CA-LDLD0SQ2 |
| Operating System :- Windows 10 Enterprise x64 Edition, Build 17763 |
| PIDS :- 5724H7251 |
| LVLS :- 8.0.0.11 |
| Product Long Name :- IBM MQ for Windows (x64 platform) |
| Vendor :- IBM |
| O/S Registered :- 0 |
| Data Path :- C:\Python\Scripts\IBM |
| Installation Path :- C:\Python |
| Installation Name :- MQNI08000011 (126) |
| License Type :- Unknown |
| Probe Id :- XC207013 |
| Application Name :- MQM |
| Component :- xxxInitialize |
| SCCS Info :- F:\build\slot1\p800_P\src\lib\cs\amqxeida.c, |
| Line Number :- 5085 |
| Build Date :- Dec 12 2018 |
| Build Level :- p800-011-181212.1 |
| Build Type :- IKAP - (Production) |
| UserID :- alekhya.machiraju |
| Process Name :- C:\Python\python.exe |
| Arguments :- |
| Addressing mode :- 32-bit |
| Process :- 00010908 |
| Thread :- 00000001 |
| Session :- 00000001 |
| UserApp :- TRUE |
| Last HQC :- 0.0.0-0 |
| Last HSHMEMB :- 0.0.0-0 |
| Last ObjectName :- |
| Major Errorcode :- xecF_E_UNEXPECTED_SYSTEM_RC |
| Minor Errorcode :- OK |
| Probe Type :- INCORROUT |
| Probe Severity :- 2 |
| Probe Description :- AMQ6090: MQM could not display the text for error |
| 536895781. |
| FDCSequenceNumber :- 0 |
| Comment1 :- WinNT error 1082155270 from Open ccsid.tbl. |
| |
+-----------------------------------------------------------------------------+
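One detail worth noticing in the FDC above: it reports `Addressing mode :- 32-bit` even though the installed client is described as x64. A 32-bit Python interpreter trying to load a 64-bit MQ client library (or vice versa) is a classic trigger for MQRC_ENVIRONMENT_ERROR, so checking the interpreter's bitness is a cheap first step:

```python
import struct

# A pointer ("P") is 8 bytes on a 64-bit interpreter, 4 on a 32-bit one.
bits = struct.calcsize("P") * 8
print(f"This Python interpreter is {bits}-bit")
```

If this prints 32-bit, installing a 64-bit Python (or the 32-bit MQ client libraries) would be the thing to try.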

Filter and update a particular column of each row based on multiple column values in the same row (Python SQLAlchemy)

Here's a SQL table
|---------------------|------------------|---------------------|------------------|
| ID | StartDate | EndDate | Status |
|---------------------|------------------|---------------------|------------------|
| 0 | 2019-12-19T10:00 | 2019-12-28T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
| 1 | 2019-11-19T10:00 | 2019-11-28T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
| 2 | 2019-12-13T10:00 | 2019-12-17T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
| 3 | 2019-10-19T10:00 | 2019-10-28T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
| 4 | 2019-12-24T10:00 | 2019-12-28T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
I want to update the Status column of each row based on the values of two other columns in the same row, StartDate and EndDate.
The condition would be:
if StartDate < current_date and EndDate < current_date,
then update that row's Status value to Inactive.
If current_date is 2019-12-13T10:00,
this should be the resultant output of the operation:
|---------------------|------------------|---------------------|------------------|
| ID | StartDate | EndDate | Status |
|---------------------|------------------|---------------------|------------------|
| 0 | 2019-12-19T10:00 | 2019-12-28T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
| 1 | 2019-11-19T10:00 | 2019-11-28T10:00 | Inactive |
|---------------------|------------------|---------------------|------------------|
| 2 | 2019-12-13T10:00 | 2019-12-17T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
| 3 | 2019-10-19T10:00 | 2019-10-28T10:00 | Inactive |
|---------------------|------------------|---------------------|------------------|
| 4 | 2019-12-24T10:00 | 2019-12-28T10:00 | Active |
|---------------------|------------------|---------------------|------------------|
I tried
DBSession.query(User).filter(and_(User.c.Status=="Active",User.c.StartDate < current_date, User.c.EndDate < current_date)).update({"Status":"Inactive"})
Even when I try this
from sqlalchemy import and_, func, update
DBSession.query(User).filter(and_(User.c.Status=="Active",func.date(User.c.StartDate) < current_date, func.date(User.c.EndDate) < current_date)).update({"Status":"Inactive"})
source: SQLAlchemy: how to filter date field?
but I get this error
> Traceback (most recent call last): File
> "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 2463, in
> __call__
> return self.wsgi_app(environ, start_response) File "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 2449, in
> wsgi_app
> response = self.handle_exception(e) File "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 1866, in
> handle_exception
> reraise(exc_type, exc_value, tb) File "C:\Users\Pl\Envs\r\lib\site-packages\flask\_compat.py", line 39, in
> reraise
> raise value File "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 2446, in
> wsgi_app
> response = self.full_dispatch_request() File "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 1951, in
> full_dispatch_request
> rv = self.handle_user_exception(e) File "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 1820, in
> handle_user_exception
> reraise(exc_type, exc_value, tb) File "C:\Users\Pl\Envs\r\lib\site-packages\flask\_compat.py", line 39, in
> reraise
> raise value File "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 1949, in
> full_dispatch_request
> rv = self.dispatch_request() File "C:\Users\Pl\Envs\r\lib\site-packages\flask\app.py", line 1935, in
> dispatch_request
> return self.view_functions[rule.endpoint](**req.view_args) File "D:\cs\serverv0.6.py", line 145, in campaign
> DBSession.query(User).filter(and_(User.c.StartDate < current_date, User.c.EndDate < current_date)).update({"status": "Inactive"}) File
> "C:\Users\Pl\Envs\r\lib\site-packages\sqlalchemy\orm\query.py", line
> 3862, in update
> update_op.exec_() File "C:\Users\Pl\Envs\r\lib\site-packages\sqlalchemy\orm\persistence.py",
> line 1692, in exec_
> self._do_pre_synchronize() File "C:\Users\Pl\Envs\r\lib\site-packages\sqlalchemy\orm\persistence.py",
> line 1754, in _do_pre_synchronize
> target_cls = query._mapper_zero().class_ AttributeError: 'NoneType' object has no attribute 'class_'
What is going wrong?
I found this workaround:
active_users = userTableSession.query(userTable).filter(userTable.c.Status == 'Active').all()
active_users_df = pd.DataFrame(active_users)
# targeting the end date
active_users_df['EndDates'] = pd.to_datetime(active_users_df['EndDate'], format = '%Y-%m-%dT%H:%M')
# fetch the current date
current_date_time = datetime.now().strftime('%Y-%m-%dT%H:%M')
# pick out the rows end date have not expired
expired_user = active_users_df.loc[(active_users_df['EndDates'] < current_date_time)]
# fetch the user ID in a list
inactive_user_list = list(expired_user['ID'])
print(inactive_user_list)
# update the table
u = userTable.update().values(Status="Inactive").where(userTable.c.ID.in_(inactive_user_list))
userTableSession.execute(u)
userTableSession.commit()
Can someone post a lazier (simpler) answer than this?
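For reference, the pandas round-trip above can collapse into a single UPDATE statement, because ISO-8601 timestamps compare correctly as plain strings. A sketch using the table from the question (sqlite3 here only to keep the example self-contained; the same WHERE clause maps directly onto a SQLAlchemy Core `update()`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE User (ID INTEGER, StartDate TEXT, EndDate TEXT, Status TEXT)"
)
conn.executemany(
    "INSERT INTO User VALUES (?, ?, ?, ?)",
    [
        (0, "2019-12-19T10:00", "2019-12-28T10:00", "Active"),
        (1, "2019-11-19T10:00", "2019-11-28T10:00", "Active"),
        (2, "2019-12-13T10:00", "2019-12-17T10:00", "Active"),
        (3, "2019-10-19T10:00", "2019-10-28T10:00", "Active"),
        (4, "2019-12-24T10:00", "2019-12-28T10:00", "Active"),
    ],
)

current_date = "2019-12-13T10:00"
# ISO-8601 strings sort chronologically, so string comparison is enough here.
conn.execute(
    "UPDATE User SET Status = 'Inactive' "
    "WHERE Status = 'Active' AND StartDate < ? AND EndDate < ?",
    (current_date, current_date),
)
statuses = [row[0] for row in conn.execute("SELECT Status FROM User ORDER BY ID")]
```

This reproduces the expected table from the question: rows 1 and 3 flip to Inactive, the rest stay Active.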

Why is requirements.txt not being installed on deployment to AppEngine?

I'm attempting to upgrade an existing project to the new Python 3 App Engine Standard Environment. I'm able to deploy my application code, but the app is crashing because it cannot find dependencies that are defined in the requirements.txt file. The app file structure looks like this:
|____requirements.txt
|____dispatch.yaml
|____dashboard
| |____dashboard.yaml
| |____static
| | |____gen
| | | |____favicon.ico
| | | |____fonts
| | | | |____MaterialIcons-Regular.012cf6a1.woff
| | | |____app.js
| | |____img
| | | |____avatar-06.png
| | | |____avatar-07.png
| | | |____avatar-05.png
| | | |____avatar-04.png
| |____templates
| | |____gen
| | | |____index.html
| |____main.py
| |____.gcloudignore
|____.gcloudignore
And the requirements.txt file looks like this:
Flask==0.12.2
pyjwt==1.6.1
flask-cors==3.0.3
requests==2.19.1
google-auth==1.5.1
pillow==5.3.0
grpcio-tools==1.16.1
google-cloud-storage==1.13.0
google-cloud-firestore==0.30.0
requests-toolbelt==0.8.0
Werkzeug<0.13.0,>=0.12.0
firestore-model>=0.0.2
After deploying, when I visit the site on the web, I get a 502. The GCP Console Error Reporting service indicates the error is thrown from a line in main.py where it attempts to import one of the above dependencies: ModuleNotFoundError: No module named 'google'
I've tried moving the requirements.txt into the dashboard folder and get the same result.
Stack Trace:
Traceback (most recent call last):
File "/env/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process()
File "/env/lib/python3.7/site-packages/gunicorn/workers/gthread.py", line 104, in init_process
super(ThreadWorker, self).init_process()
File "/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line 129, in init_process
self.load_wsgi()
File "/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
self.wsgi = self.app.wsgi()
File "/env/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
return self.load_wsgiapp()
File "/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
return util.import_app(self.app_uri)
File "/env/lib/python3.7/site-packages/gunicorn/util.py", line 350, in import_app
__import__(module)
File "/srv/main.py", line 12, in <module>
from google.cloud import storage
ModuleNotFoundError: No module named 'google'
A few things could be going wrong. Make sure that:
Your requirements.txt file is in the same directory as your main.py file
Your .gcloudignore is not ignoring your requirements.txt file
You are deploying from the same directory that contains requirements.txt and main.py
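As a rough self-check for the second point: .gcloudignore uses gitignore-style patterns, so you can approximate the match locally. This is a simplified sketch (`will_upload` is my own helper, and fnmatch only approximates the real syntax, which also supports `!` negation and directory semantics):

```python
import fnmatch

def will_upload(filename, gcloudignore_lines):
    """Return True if `filename` is NOT matched by any ignore pattern.

    Simplified model: skips blank lines and comments, then tests each
    remaining pattern with fnmatch.
    """
    patterns = [
        line.strip() for line in gcloudignore_lines
        if line.strip() and not line.startswith("#")
    ]
    return not any(fnmatch.fnmatch(filename, p) for p in patterns)
```

Running this over each of the two `.gcloudignore` files in the tree would quickly reveal whether requirements.txt is being excluded from the upload.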

Exception in thread w2 when using the Couchbase backup tool cbbackup

When I use the Couchbase cbbackup tool to back up my data, I hit a problem like this:
E:\couchbase\bin>cbbackup -m diff http://localhost:8091 E:\couchbase_backup -u Administrator -p password
[ ] 0.0% (0/estimated 7303 msgs)
bucket: beer-sample, msgs transferred...
: total | last | per sec
byte : 0 | 0 | 0.0
.
bucket: bucket-beer-sample, msgs transferred...
: total | last | per sec
byte : 0 | 0 | 0.0
[ ] 0.0% (0/estimated 586 msgs)
bucket: bucket-gamesim-sample, msgs transferred...
: total | last | per sec
byte : 0 | 0 | 0.0
[ ] 0.0% (0/estimated 31369 msgs)
bucket: bucket-travel-sample, msgs transferred...
: total | last | per sec
byte : 0 | 0 | 0.0
[####################] 100.0% (586/estimated 586 msgs)
bucket: gamesim-sample, msgs transferred...
: total | last | per sec
byte : 94693 | 94693 | 20342.2
Exception in thread w1:
Traceback (most recent call last):
File "threading.pyc", line 551, in __bootstrap_inner
File "threading.pyc", line 504, in run
File "pump.pyc", line 302, in run_worker
File "pump.pyc", line 360, in run
File "pump_dcp.pyc", line 180, in provide_batch
File "pump_dcp.pyc", line 508, in get_dcp_conn
File "pump_dcp.pyc", line 629, in setup_dcp_streams
File "pump_dcp.pyc", line 649, in request_dcp_stream
File "cb_bin_client.pyc", line 65, in _sendMsg
error: [Errno 10053]
I don't know how to resolve it. I've been stuck on it for two days; please help me. Thanks.
By the way, this didn't happen two days ago, and the exception sometimes says "Exception in thread w2" instead.
https://i.stack.imgur.com/HzXTl.png
