I'm trying to customise a preseed file on Ubuntu 14.04, which is where all the parameters required for installation are stored. At the first OS boot the values and variables are exported, and configuration completes without any manual intervention.
During this process I opted for an LVM-based layout for the disk configuration.
With fixed partition sizes it works fine with the configuration below:
d-i partman-auto/expert_recipe string \
    boot-root :: \
        250 1000 300 ext4 \
            $primary{ } $bootable{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ /boot } \
        . \
        # swap with at least 5GB, maximum size of memory
        5120 500 100% linux-swap \
            $lvmok{ } \
            method{ swap } format{ } \
        . \
        # /var/db/application with at least 40GB, maximum 40GB, and ext4
        40960 1000 40960 ext4 \
            $lvmok{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ /var/db/application } \
        . \
        # / with at least 40GB, maximum 40GB, and ext4
        40960 1000 40960 ext4 \
            $lvmok{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ / } \
        . \
        # /var/log/application with at least 20GB, maximum rest of space, and ext4
        20480 1000 1000000 ext4 \
            $lvmok{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ /var/log/application } \
        .
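(For reference, each stanza in the recipe opens with <minimal size> <priority> <maximal size> <filesystem>, sizes in megabytes; this annotated summary is my paraphrase of the partman-auto recipe format, not part of the preseed itself:)
# <min size MB> <priority> <max size MB> <filesystem>
#   250 1000 300 ext4          -> /boot: between 250 MB and 300 MB
#   5120 500 100% linux-swap   -> swap: at least 5120 MB; a percentage is
#                                 read by partman relative to RAM, not disk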
My query: since the hard disk size is different every time, I tried assigning disk space based on percentages instead, but it doesn't work for me. The partman configuration is as below:
d-i partman-auto/expert_recipe string \
    boot-root :: \
        250 1000 300 ext4 \
            $primary{ } $bootable{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ /boot } \
        . \
        # swap with at least 5GB, maximum size of memory
        5120 500 100% linux-swap \
            $lvmok{ } \
            method{ swap } format{ } \
        . \
        # /var/db/application with at least 40% of the disk, maximum 40% of the disk, and ext4
        40% 1000 40% ext4 \
            $lvmok{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ /var/db/application } \
        . \
        # / with at least 35% of the disk, maximum 35% of the disk, and ext4
        35% 1000 35% ext4 \
            $lvmok{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ / } \
        . \
        # /var/log/application with at least 20% of the disk, maximum rest of space, and ext4
        20% 1000 1000000 ext4 \
            $lvmok{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ /var/log/application } \
        .
Any help on where I'm going wrong?
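From what I can tell, the percentage semantics is the crux: in a partman-auto recipe a size with a % suffix means a percentage of the machine's RAM, not of the disk (which is why 100% is the idiomatic cap for swap), so 40% here never means 40% of the drive. One workaround is to compute absolute megabyte values from the real disk size in partman/early_command and inject the recipe with debconf-set. A minimal, untested sketch; the device name sda and the 40/35 splits are assumptions mirroring the intent above:
# Sketch only: derive sizes from /sys (512-byte sectors -> MB), then
# preseed the recipe with absolute values before partman runs.
d-i partman/early_command string \
    SIZE_MB=$(( $(cat /sys/block/sda/size) / 2048 )); \
    DB_MB=$(( SIZE_MB * 40 / 100 )); \
    ROOT_MB=$(( SIZE_MB * 35 / 100 )); \
    debconf-set partman-auto/expert_recipe "boot-root :: \
      250 1000 300 ext4 \$primary{ } \$bootable{ } method{ format } format{ } use_filesystem{ } filesystem{ ext4 } mountpoint{ /boot } . \
      5120 500 100% linux-swap \$lvmok{ } method{ swap } format{ } . \
      $DB_MB 1000 $DB_MB ext4 \$lvmok{ } method{ format } format{ } use_filesystem{ } filesystem{ ext4 } mountpoint{ /var/db/application } . \
      $ROOT_MB 1000 $ROOT_MB ext4 \$lvmok{ } method{ format } format{ } use_filesystem{ } filesystem{ ext4 } mountpoint{ / } . \
      20480 1000 1000000 ext4 \$lvmok{ } method{ format } format{ } use_filesystem{ } filesystem{ ext4 } mountpoint{ /var/log/application } ."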
I am getting the error below when I try to attach a shell script to a ConfigMap. I am not sure what the issue is, because the script works when it is not added to the ConfigMap. The error says the problem is on line 58, which is not even there. Any help will be really appreciated.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.metadata.name }}-micro
data:
  micro-integrator.sh: |
    #!/bin/sh
    # micro-integrator.sh
    while [ "$status" = "$START_EXIT_STATUS" ]
    do
        $JAVACMD \
            -Xbootclasspath/a:"$CARBON_XBOOTCLASSPATH" \
            $JVM_MEM_OPTS \
            -XX:+HeapDumpOnOutOfMemoryError \
            -XX:HeapDumpPath="$CARBON_HOME/repository/logs/heap-dump.hprof" \
            $JAVA_OPTS \
            -Dcom.sun.management.jmxremote \
            -classpath "$CARBON_CLASSPATH" \
            -Djava.io.tmpdir="$CARBON_HOME/tmp" \
            -Dcatalina.base="$CARBON_HOME/wso2/lib/tomcat" \
            -Dwso2.server.standalone=true \
            -Dcarbon.registry.root=/ \
            -Djava.command="$JAVACMD" \
            -Dqpid.conf="/conf/advanced/" \
            $JAVA_VER_BASED_OPTS \
            -Dcarbon.home="$CARBON_HOME" \
            -Dlogger.server.name="micro-integrator" \
            -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager \
            -Dcarbon.config.dir.path="$CARBON_HOME/conf" \
            -Dcarbon.repository.dir.path="$CARBON_HOME/repository" \
            -Dcarbon.components.dir.path="$CARBON_HOME/wso2/components" \
            -Dcarbon.dropins.dir.path="$CARBON_HOME/dropins" \
            -Dcarbon.external.lib.dir.path="$CARBON_HOME/lib" \
            -Dcarbon.patches.dir.path="$CARBON_HOME/patches" \
            -Dcarbon.internal.lib.dir.path="$CARBON_HOME/wso2/lib" \
            -Dcom.atomikos.icatch.hide_init_file_path=true \
            -Dorg.apache.jasper.compiler.Parser.STRICT_QUOTE_ESCAPING=false \
            -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true \
            -Dcom.sun.jndi.ldap.connect.pool.authentication=simple \
            -Dcom.sun.jndi.ldap.connect.pool.timeout=3000 \
            -Dorg.terracotta.quartz.skipUpdateCheck=true \
            -Djava.security.egd=file:/dev/./urandom \
            -Dfile.encoding=UTF8 \
            -Djava.net.preferIPv4Stack=true \
            -DNonRegistryMode=true \
            -DNonUserCoreMode=true \
            -Dcom.ibm.cacheLocalHost=true \
            -Dcarbon.use.registry.repo=false \
            -DworkerNode=false \
            -Dorg.apache.cxf.io.CachedOutputStream.Threshold=104857600 \
            -DavoidConfigHashRead=true \
            -Dproperties.file.path=default \
            -DenableReadinessProbe=true \
            -DenableManagementApi=true \
            $NODE_PARAMS \
            -Dorg.apache.activemq.SERIALIZABLE_PACKAGES="*" \
            org.wso2.micro.integrator.bootstrap.Bootstrap $*
        status="$?"
    done
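One thing that may explain the phantom line 58: the {{ .Values.metadata.name }} reference suggests this ConfigMap is a Helm template, and YAML errors from Helm report line numbers in the rendered manifest, not in the source file, so they can point at a line the template doesn't literally contain. Also check that every line of the embedded script is indented at least as far as the "#!/bin/sh" line, since any line that falls back to column 0 ends the | block and breaks the YAML. To see exactly what the parser sees, render the chart locally (release and chart names are placeholders):
# Render the templates without installing, then look around the
# reported line in the generated manifest (numbering follows this output).
helm template my-release ./my-chart > rendered.yaml
sed -n '50,65p' rendered.yaml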
I want to fine-tune on SQuAD with Hugging Face's run_squad.py, but I've run into the following problem:
1. When I use "--do_train" without "True", as in the following code, there are no models in output_dir after 20 minutes of running:
!python run_squad.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--output_dir models/bert/ \
--data_dir data/squad \
--overwrite_output_dir \
--overwrite_cache \
--do_train \
--train_file train-v2.0.json \
--version_2_with_negative \
--do_lower_case \
--do_eval \
--predict_file dev-v2.0.json \
--per_gpu_train_batch_size 2 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--threads 10 \
--save_steps 5000
2. When I use "--do_train=True", as in the following code, the error message is "run_squad.py: error: argument --do_train: ignored explicit argument 'True'":
!python run_squad.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--output_dir models/bert/ \
--data_dir data/squad \
--overwrite_output_dir \
--overwrite_cache \
--do_train=True \
--train_file train-v2.0.json \
--version_2_with_negative \
--do_lower_case \
--do_eval \
--predict_file dev-v2.0.json \
--per_gpu_train_batch_size 2 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--threads 10 \
--save_steps 5000
3. When I use "--do_train True", as in the following code, the error message is "run_squad.py: error: unrecognized arguments: True":
!python run_squad.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--output_dir models/bert/ \
--data_dir data/squad \
--overwrite_output_dir \
--overwrite_cache \
--do_train True \
--train_file train-v2.0.json \
--version_2_with_negative \
--do_lower_case \
--do_eval \
--predict_file dev-v2.0.json \
--per_gpu_train_batch_size 2 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--threads 10 \
--save_steps 5000
I run the code in Colab with a Tesla P100-PCIE-16GB GPU.
Judging by the running time, I don't think the code went through the training process, but I don't know how to set the parameters so that training runs. What should I do?
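If I remember run_squad.py correctly, --do_train is declared with argparse's action="store_true", so your first invocation is the correct one: a store_true flag takes no value, which is exactly why the "=True" and separate "True" forms are rejected with the errors you quoted. A small standalone reproduction of that argparse behaviour:
import argparse

# Boolean flags in run_squad.py are declared this way.
parser = argparse.ArgumentParser()
parser.add_argument("--do_train", action="store_true")

print(parser.parse_args(["--do_train"]))    # Namespace(do_train=True)
print(parser.parse_args([]))                # Namespace(do_train=False)
# parser.parse_args(["--do_train=True"])    # error: argument --do_train: ignored explicit argument 'True'
# parser.parse_args(["--do_train", "True"]) # error: unrecognized arguments: True
If training is in fact running, note also that checkpoints are only written every --save_steps (5000 here) steps, with the final model saved at the end, so an empty output_dir 20 minutes into a batch-size-2 run over SQuAD v2 is not necessarily a failure.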
I want to build a static Qt application. First of all I built a static version of Qt on my Linux machine. The first time I built the static application, I got errors because Qt couldn't link my application against some Linux *.so libraries. I fixed it by using
unix{ QMAKE_LFLAGS += -static-libgcc -static-libstdc++ }
and my application now runs on every Linux platform, independently of the local Linux libraries; it doesn't link against the local .so files. I want to do the same thing with my protobuf files, but I couldn't find out how on the internet, so I'm asking for help here. I've added my .pro file below.
QT += core gui
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
TARGET = EthernetAntCtrl_app
TEMPLATE = app
ROOT = ./..
DESTDIR = $$ROOT/bin
TMP_PRG_DIR = $$ROOT/tmp
MOC_DIR = $$TMP_PRG_DIR/$$TARGET/moc
OBJECTS_DIR = $$TMP_PRG_DIR/$$TARGET/obj
UI_DIR = $$TMP_PRG_DIR/$$TARGET/ui
RCC_DIR = $$TMP_PRG_DIR/$$TARGET/resources
PROTOBUF_DIR = /usr/local/include/google/protobuf
include($$ROOT/status_monitoring/status_monitoring.pri)
include($$ROOT/qt-propertybrowser/src/qtpropertybrowser.pri)
DEFINES += QT_DEPRECATED_WARNINGS
#-static-libprotobuf
unix {
    QMAKE_LFLAGS += -static-libgcc -static-libstdc++
}
SOURCES += \
main.cpp \
mainwindow.cpp \
interconnect.cpp \
switch.cpp \
../Common/MyLog.cpp \
../DevMan/DevmanLibraryController/CmdBase.cpp \
../DevMan/DevmanLibraryController/DevmanLibraryController.cpp \
../DevMan/Common/Names/CommonNames.cpp \
../DevMan/Common/Names/SystemMessageNames.cpp \
../DevMan/DevmanLibraryController/ConnectMsgs/TcpConnectMessage.cpp \
../DevMan/DevmanLibraryController/ConnectMsgs/TcpDisconnectMessage.cpp \
../DevMan/DevmanLibraryController/InitDeviceManagerMsgs/ResultMessage.cpp \
Msgs/InitDeviceManager_AntCtrl.cpp \
textedit.cpp \
pushbutton.cpp \
controller.cpp \
Msgs/CustomNames.cpp \
Msgs/AntCntrlMessage.cpp \
button_style.cpp \
ClientParameterManager.cpp \
../DevMan/DevmanLibraryController/ConnectMsgs/UdpConnectMessage.cpp \
../DevMan/DevmanLibraryController/ConnectMsgs/UdpDisconnectMessage.cpp \
LedButton.cpp
HEADERS += \
switch.h \
mainwindow.h \
interconnect.h \
structureofpackets.h \
../Common/MacroDefines.h \
../Common/MyLog.h \
../DevMan/DevmanLibraryController/CmdBase.h \
../DevMan/DevmanLibraryController/DevmanLibraryController.h \
../DevMan/Common/Names/CommonNames.h \
../DevMan/Common/Names/SystemMessageNames.h \
../DevMan/DevmanLibraryController/ConnectMsgs/TcpConnectMessage.h \
../DevMan/DevmanLibraryController/ConnectMsgs/TcpDisconnectMessage.h \
../DevMan/DevmanLibraryController/InitDeviceManagerMsgs/ResultMessage.h \
Msgs/InitDeviceManager_AntCtrl.h \
textedit.h \
pushbutton.h \
controller.h \
Msgs/CustomNames.h \
Msgs/AntCntrlMessage.h \
button_style.h \
ClientParameterManager.h \
../DevMan/DevmanLibraryController/ConnectMsgs/UdpConnectMessage.h \
../DevMan/DevmanLibraryController/ConnectMsgs/UdpDisconnectMessage.h \
LedButton.h
FORMS += \
mainwindow.ui
defineTest(copyToDestdir) {
files = $$1
#copyToDestdir($$PWD/ClientGuiConfig.ini)
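On the protobuf part, a hedged sketch of the usual qmake approaches (the /usr/local paths are assumptions; point them at wherever your libprotobuf.a actually lives). Naming the static archive explicitly, or bracketing -lprotobuf with -Wl,-Bstatic/-Wl,-Bdynamic, makes the linker take the .a instead of the .so:
# Option 1: link the static archive explicitly (path is an assumption)
unix {
    INCLUDEPATH += /usr/local/include
    LIBS += /usr/local/lib/libprotobuf.a
}
# Option 2: force static resolution just for protobuf
# unix: LIBS += -L/usr/local/lib -Wl,-Bstatic -lprotobuf -Wl,-Bdynamic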
I need some help understanding Linux's `try_cmpxchg` semantics and implementation. In the kernel source, it is implemented as:
#define __raw_try_cmpxchg(_ptr, _pold, _new, size, lock) \
({ \
    bool success; \
    __typeof__(_ptr) _old = (_pold); \
    __typeof__(*(_ptr)) __old = *_old; \
    __typeof__(*(_ptr)) __new = (_new); \
    switch (size) { \
    case __X86_CASE_B: \
    { \
        volatile u8 *__ptr = (volatile u8 *)(_ptr); \
        asm volatile(lock "cmpxchgb %[new], %[ptr]" \
                     CC_SET(z) \
                     : CC_OUT(z) (success), \
                       [ptr] "+m" (*__ptr), \
                       [old] "+a" (__old) \
                     : [new] "q" (__new) \
                     : "memory"); \
        break; \
    } \
    case __X86_CASE_W: \
    { \
        volatile u16 *__ptr = (volatile u16 *)(_ptr); \
        asm volatile(lock "cmpxchgw %[new], %[ptr]" \
                     CC_SET(z) \
                     : CC_OUT(z) (success), \
                       [ptr] "+m" (*__ptr), \
                       [old] "+a" (__old) \
                     : [new] "r" (__new) \
                     : "memory"); \
        break; \
    } \
    case __X86_CASE_L: \
    { \
        volatile u32 *__ptr = (volatile u32 *)(_ptr); \
        asm volatile(lock "cmpxchgl %[new], %[ptr]" \
                     CC_SET(z) \
                     : CC_OUT(z) (success), \
                       [ptr] "+m" (*__ptr), \
                       [old] "+a" (__old) \
                     : [new] "r" (__new) \
                     : "memory"); \
        break; \
    } \
    case __X86_CASE_Q: \
    { \
        volatile u64 *__ptr = (volatile u64 *)(_ptr); \
        asm volatile(lock "cmpxchgq %[new], %[ptr]" \
                     CC_SET(z) \
                     : CC_OUT(z) (success), \
                       [ptr] "+m" (*__ptr), \
                       [old] "+a" (__old) \
                     : [new] "r" (__new) \
                     : "memory"); \
        break; \
    } \
    default: \
        __cmpxchg_wrong_size(); \
    } \
    if (unlikely(!success)) \
        *_old = __old; \
    likely(success); \
})

#define __try_cmpxchg(ptr, pold, new, size) \
    __raw_try_cmpxchg((ptr), (pold), (new), (size), LOCK_PREFIX)

#define try_cmpxchg(ptr, pold, new) \
    __try_cmpxchg((ptr), (pold), (new), sizeof(*(ptr)))
I am curious what those CC_SET and CC_OUT macros mean. They are defined as:
/*
* Macros to generate condition code outputs from inline assembly,
* The output operand must be type "bool".
*/
#ifdef __GCC_ASM_FLAG_OUTPUTS__
# define CC_SET(c) "\n\t/* output condition code " #c "*/\n"
# define CC_OUT(c) "=@cc" #c
#else
# define CC_SET(c) "\n\tset" #c " %[_cc_" #c "]\n"
# define CC_OUT(c) [_cc_ ## c] "=qm"
#endif
Also, it would be great if you could explain the exact semantics of try_cmpxchg (I don't quite understand how an atomic cmpxchg can fail...).
Newer versions of gcc (I believe from version 6) support flag outputs from inline assembly. The macros are there to use this support when available, and otherwise fall back to the old way: a setCC instruction into a temporary output.
As to how cmpxchg can "fail": it does a compare, so it fails if that compare fails, in which case the destination is unchanged and the current value is fetched from memory. Consult an instruction set reference for the details.
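To make the "failure" concrete, here is a userspace sketch with C11 atomics; atomic_compare_exchange_weak has the same try-semantics as the kernel's try_cmpxchg, in that on failure it writes the value it actually found back into the expected-value variable, so a retry loop needs no separate reload:
#include <stdatomic.h>
#include <stdio.h>

/* Lock-free increment built on compare-and-swap. */
static void atomic_inc(_Atomic unsigned *v)
{
    unsigned old = atomic_load(v);
    /* The exchange fails when another thread changed *v between our
     * load and the cmpxchg; 'old' is then refreshed with the current
     * value, so we retry with old + 1 recomputed. */
    while (!atomic_compare_exchange_weak(v, &old, old + 1))
        ; /* retry */
}

int main(void)
{
    _Atomic unsigned counter = 0;
    atomic_inc(&counter);
    printf("%u\n", counter); /* prints 1 */
    return 0;
}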
I'm having some issues getting my partitions to be of type primary, and not logical/extended.
Here is the relevant code in my preseed:
d-i partman-auto/disk string /dev/sda
d-i partman-auto/method string lvm
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-md/device_remove_md boolean true
d-i partman-lvm/confirm boolean true
d-i partman-auto-lvm/guided_size string max
d-i partman-auto/choose_recipe select boot-root
d-i partman-auto-lvm/new_vg_name string vg00
d-i partman-auto/expert_recipe string \
    boot-root :: \
        512 512 512 ext3 \
            $primary{ } $bootable{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext3 } \
            mountpoint{ /boot } \
        . \
        2048 2048 2048 swap \
            $primary{ } $lvmok{ } lv_name{ lv_swap } $defaultignore{ } \
            method{ swap } format{ } \
        . \
        1024 10000 -1 ext4 \
            $primary{ } $lvmok{ } lv_name{ lv_root } $defaultignore{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ / } \
        .
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
The problem is, this then creates the following partition scheme:
root@ubuntu-server-1404-devit:~# fdisk /dev/sda
Command (m for help): p
Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009ac4d
Device     Boot     Start       End   Blocks  Id  System
/dev/sda1  *         2048   2000895   999424  83  Linux
/dev/sda2         2002942  20969471  9483265   5  Extended
/dev/sda5         2002944  20969471  9483264  8e  Linux LVM
I'd like to remove this unnecessary extended/logical partition and just have the Linux LVM partition on sda2 (primary), like so:
Device     Boot     Start       End   Blocks  Id  System
/dev/sda1  *         2048   2000895   999424  83  Linux
/dev/sda2         2002942  20969471  9483265  8e  Linux LVM
Well, the documentation for this is pretty embarrassingly minimal, but I've figured it out (I needed to define the volume group name and the method type lvm, for some reason):
d-i partman-auto/disk string /dev/sda
d-i partman-auto/method string lvm
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-md/device_remove_md boolean true
d-i partman-lvm/confirm boolean true
d-i partman-auto-lvm/guided_size string max
d-i partman-auto/choose_recipe select boot-root
d-i partman-auto-lvm/new_vg_name string vg00
d-i partman-auto/expert_recipe string \
    boot-root :: \
        1024 1024 1024 ext4 \
            $primary{ } $bootable{ } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ /boot } \
        . \
        100 1000 1000000000 ext4 \
            $defaultignore{ } \
            $primary{ } \
            method{ lvm } \
            device{ /dev/sda } \
            vg_name{ vg00 } \
        . \
        2048 2048 2048 swap \
            $lvmok{ } lv_name{ lv_swap } in_vg{ vg00 } \
            method{ swap } format{ } \
        . \
        1024 3072 -1 ext4 \
            $lvmok{ } lv_name{ lv_root } in_vg{ vg00 } \
            method{ format } format{ } \
            use_filesystem{ } filesystem{ ext4 } \
            mountpoint{ / } \
        .
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
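After an install with this recipe, a quick sanity check that the PV really landed on a primary partition (device name assumed to be /dev/sda as above):
# No Extended/Logical rows should appear, and sda2 should show Id 8e (Linux LVM).
fdisk -l /dev/sda
# The physical volume backing vg00 should sit directly on /dev/sda2.
pvs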