Packer vsphere-iso stuck on waiting for IP - Linux

I am attempting to build a Packer template using the following Packer JSON configuration. However, Packer gets stuck on "Waiting for IP..." and doesn't make any progress.
Thanks for your help in advance.
{
  "builders": [
    {
      "CPUs": 1,
      "RAM": 1024,
      "RAM_reserve_all": false,
      "boot_command": [
        "<esc><wait>",
        "linux ks=hd:fd0:packer/http/ks.cfg<enter>"
      ],
      "cluster": "{{user `vsphere-cluster`}}",
      "convert_to_template": true,
      "datacenter": "{{user `vsphere-datacenter`}}",
      "datastore": "{{user `vsphere-datastore`}}",
      "disk_controller_type": "pvscsi",
      "floppy_files": [
        "packer/http/ks.cfg"
      ],
      "folder": "{{user `folder`}}",
      "guest_os_type": "rhel7_64Guest",
      "insecure_connection": "true",
      "iso_paths": [
        "{{user `iso_url`}}"
      ],
      "network_adapters": [
        {
          "network": "{{user `vsphere-network`}}",
          "network_card": "vmxnet3"
        }
      ],
      "notes": "Build via Packer",
      "password": "{{user `vsphere-password`}}",
      "ssh_password": "{{user `ssh_pass`}}",
      "ssh_username": "{{user `ssh_user`}}",
      "storage": [
        {
          "disk_size": 25000,
          "disk_thin_provisioned": true
        }
      ],
      "type": "vsphere-iso",
      "username": "{{user `vsphere-user`}}",
      "vcenter_server": "{{user `vsphere-server`}}",
      "vm_name": "{{user `vm_name`}}"
    }
  ],
  "provisioners": [
    {
      "execute_command": "sudo {{.Vars}} sh {{.Path}}",
      "scripts": [
        "packer/scripts/cleanup.sh"
      ],
      "type": "shell"
    }
  ],
  "variables": {
    "folder": "Templates",
    "iso_url": "[datastore001] path/toiso/rhel-server-7.9-x86_64-dvd.iso",
    "ssh_pass": "password",
    "ssh_user": "root",
    "vm-cpu-num": "1",
    "vm-disk-size": "25600",
    "vm-mem-size": "1024",
    "vm_name": "packer-template-test",
    "vsphere-cluster": "cluster",
    "vsphere-datacenter": "datacenter",
    "vsphere-datastore": "datastore",
    "vsphere-network": "vsphere-network",
    "vsphere-password": "password2",
    "vsphere-server": "x.x.x.x",
    "vsphere-user": "admin"
  }
}
My Kickstart file looks like the following...
lang en_US
keyboard us
timezone America/New_York --isUtc
rootpw --iscrypted (*&YTHGRFDER^&VGGUYIUYIUY*&^&^&^UIYIYI
#Network
network --bootproto=static --ip=10.0.023 --netmask=255.255.255.0 --gateway=10.0.0.1 --nameserver=10.0.0.25 --device=ens192
#platform x86_64
reboot
text
nfs --server=x.x.x.x --dir=/path/to/iso/rhel-server-7.9-x86_64-dvd.iso
bootloader --location=mbr --append="rhgb quiet crashkernel=auto"
zerombr
clearpart --all --initlabel
volgroup System --pesize=4000 pv.0
part pv.0 --fstype=lvmpv --ondisk=sda --size=120000
part /boot --fstype=xfs --ondisk=sda2 --size=500
part /boot/efi --fstype=vfat --ondisk=sda1 --size=500
logvol swap --vgname=System --name=swap --fstype=swap --size=4000
logvol /home --vgname=System --name=home --fstype=xfs --size=500
logvol / --vgname=System --name=root --fstype=xfs --size=44000
auth --passalgo=sha512 --useshadow
selinux --disabled
firewall --enabled
firstboot --disable
%packages
#^graphical-server-environment
%end
This is my Packer output; it times out after 30 minutes 23 seconds.
vsphere-iso: output will be in this color.
==> vsphere-iso: Creating VM...
==> vsphere-iso: Customizing hardware...
==> vsphere-iso: Mounting ISO images...
==> vsphere-iso: Adding configuration parameters...
==> vsphere-iso: Creating floppy disk...
vsphere-iso: Copying files flatly from floppy_files
vsphere-iso: Copying file: packer/http/ks.cfg
vsphere-iso: Done copying files from floppy_files
vsphere-iso: Collecting paths from floppy_dirs
vsphere-iso: Resulting paths from floppy_dirs : []
vsphere-iso: Done copying paths from floppy_dirs
vsphere-iso: Copying files from floppy_content
vsphere-iso: Done copying files from floppy_content
==> vsphere-iso: Uploading created floppy image
==> vsphere-iso: Adding generated Floppy...
==> vsphere-iso: Set boot order temporary...
==> vsphere-iso: Power on VM...
==> vsphere-iso: Waiting 10s for boot...
==> vsphere-iso: Typing boot command...
==> vsphere-iso: Waiting for IP...
==> vsphere-iso: Timeout waiting for IP.
==> vsphere-iso: Clear boot order...
==> vsphere-iso: Power off VM...
==> vsphere-iso: Deleting Floppy image ...
==> vsphere-iso: Destroying VM...
Build 'vsphere-iso' errored after 30 minutes 23 seconds: Timeout waiting for IP.
==> Wait completed after 30 minutes 23 seconds
==> Some builds didn't complete successfully and had errors:
--> vsphere-iso: Timeout waiting for IP.
==> Builds finished but no artifacts were created.

Related

Log all failed attempts in TestCafe quarantine mode?

I have quarantine mode enabled in my TestCafe configuration.
"ci-e2e": {
"browsers": [
"chrome:headless"
],
"debugOnFail": false,
"src": "./tests/e2e/*.test.ts",
"concurrency": 1,
"quarantineMode": true,
"reporters": [
{
"name": "nunit3",
"output": "results/e2e/testResults.xml"
},
{
"name": "spec"
}
],
"screenshots": {
"takeOnFails": true,
"path": "results/ui/screenshots",
"pathPattern": "${DATE}_${TIME}/${FIXTURE}/${TEST}/Screenshot-${QUARANTINE_ATTEMPT}.png"
},
"video": {
"path": "results/ui/video",
"failedOnly": true,
"pathPattern": "${DATE}_${TIME}/${FIXTURE}/${TEST}/Video-${QUARANTINE_ATTEMPT}"
}
},
Now, when an attempt fails, I get an entry in the log (the NUnit XML log file) with information about the failed runs but only one stack trace. I do have a screenshot for each failed run.
<failure>
  <message>
    <![CDATA[ ❌ AssertionError: ... Run 1: Failed Run 2: Failed Run 3: Failed ]]>
  </message>
  <stack-trace>
    here we have stack-trace for only one failed run
  </stack-trace>
</failure>
I want a log entry with a stack trace for each failed run of each failed test. Is it possible to configure TestCafe this way? If not, what do I need to do?
There is a mistake in the config file. The option should be named reporter, but here it is reporterS. This means TestCafe doesn't use these reporters at all, and you may just be looking at an outdated results file.
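For reference, the corrected part of the "ci-e2e" profile would look like this, with the reporter names and output path unchanged from the config above and everything else kept as it is:

"reporter": [
  {
    "name": "nunit3",
    "output": "results/e2e/testResults.xml"
  },
  {
    "name": "spec"
  }
],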

Ansible dnf module returns success even when dnf fails

This is the initial state of my system:
I have myrpm-2.0 installed.
This RPM installs just one file.
/usr/share/myfile exists as a file.
/root# rpm -q myrpm
myrpm-2.0-0.x86_64
/root# rpm -ql myrpm-2.0-0.x86_64
/usr/share
/usr/share/myfile
/root# ls -lrt /usr/share | grep myfile
-rw-r--r-- 1 root root 11 May 25 17:32 myfile
/root#
Now I am simulating an error condition. I manually removed the file and created a directory in its place.
/root# ls -lrt /usr/share | grep myfile
drwxr-xr-x 2 root root 4096 May 25 18:33 myfile
/root#
I prepared a higher version, 3.0, of the same RPM, which ships the same file.
/root# rpm -ql /root/update/myrpm-3.0-0.x86_64.rpm
/usr/share
/usr/share/myfile
/root#
Test 1: Trying the RPM upgrade using Ansible's builtin dnf module.
My playbook is as follows:
/root# cat test.yaml
---
- hosts: localhost
  tasks:
    - name: update rpm
      dnf:
        name: "myrpm"
        state: latest
/root#
Executing this playbook gives a return code of 0.
/root# ansible-playbook -vvv test.yaml
...
changed: [localhost] => {
    "changed": true,
    "invocation": {
        "module_args": {
            "allow_downgrade": false,
            "autoremove": false,
            "bugfix": false,
            "conf_file": null,
            "disable_excludes": null,
            "disable_gpg_check": false,
            "disable_plugin": [],
            "disablerepo": [],
            "download_dir": null,
            "download_only": false,
            "enable_plugin": [],
            "enablerepo": [],
            "exclude": [],
            "install_repoquery": true,
            "install_weak_deps": true,
            "installroot": "/",
            "list": null,
            "lock_timeout": 30,
            "name": [
                "myrpm"
            ],
            "releasever": null,
            "security": false,
            "skip_broken": false,
            "state": "latest",
            "update_cache": false,
            "update_only": false,
            "validate_certs": true
        }
    },
    "msg": "",
    "rc": 0,
    "results": [
        "Installed: myrpm-3.0-0.x86_64",
        "Removed: myrpm-2.0-0.x86_64"
    ]
}
META: ran handlers
META: ran handlers
PLAY RECAP *******************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
/root# echo $?
0
/root#
The rc is 0 according to the verbose log.
However, I can see that the dnf command has failed.
/root# dnf history | head -n 3
ID | Command line | Date and time | Action(s) | Altered
-------------------------------------------------------------------------------
27 | | 2022-05-25 18:39 | Upgrade | 1 EE
/root# dnf history info 27 | tail -6
Packages Altered:
** Upgrade myrpm-3.0-0.x86_64 @My_Update
Upgraded myrpm-2.0-0.x86_64 @@System
Scriptlet output:
1 error: unpacking of archive failed on file /usr/share/myfile: cpio: File from package already exists as a directory in system
2 error: myrpm-3.0-0.x86_64: install failed
/root#
Test 2: I put my system back into the exact same broken state as above, and ran the dnf command directly instead of the playbook.
/root# dnf update myrpm
...
Preparing : 1/1
Upgrading : myrpm-3.0-0.x86_64 1/2
Error unpacking rpm package myrpm-3.0-0.x86_64
error: unpacking of archive failed on file /usr/share/myfile: cpio: File from package already exists as a directory in system
Cleanup : myrpm-2.0-0.x86_64 2/2
error: myrpm-3.0-0.x86_64: install failed
Verifying : myrpm-3.0-0.x86_64 1/2
Verifying : myrpm-2.0-0.x86_64 2/2
Failed:
myrpm-3.0-0.x86_64
Error: Transaction failed
/root# echo $?
1
/root#
Any clue why the playbook did not show the task as failed?
It turned out this is a bug that was fixed in later versions of Ansible.
See https://github.com/ansible/ansible/issues/77917
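Until you can move to a fixed Ansible release, one possible workaround is to verify the result yourself right after the dnf task. This is only a sketch, and it assumes you know the expected package version string (myrpm-3.0-0.x86_64 here) ahead of time:

---
- hosts: localhost
  tasks:
    - name: update rpm
      dnf:
        name: "myrpm"
        state: latest

    # Illustrative verification step: fail explicitly if the expected
    # version is not actually installed, even though the dnf task reported rc=0.
    - name: verify myrpm upgrade really landed
      command: rpm -q myrpm-3.0-0.x86_64
      register: myrpm_check
      changed_when: false
      failed_when: myrpm_check.rc != 0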

Android O porting

I am trying to port Android O to my device, but I am encountering an issue because the vendor partition is not being created (which is mandatory for Android O); as a result, the SELinux policy is not fetched and the boot process is terminated. How can I create a vendor partition to flash the vendor image into?
My device runs on a Qualcomm MSM8953 SoC.
Below are the boot logs.
309800] nq-nci 5-0028: nqx_probe: probing nqxx failed, check hardware
[ 7.317273] Freeing unused kernel memory: 1196K
[ 7.321143] Freeing alternatives memory: 112K
[ 7.329257] init: init first stage started!
[ 7.332716] init: Using Android DT directory /proc/device-tree/firmware/android/
[ 7.724741] init: bool android::init::FirstStageMount::InitRequiredDevices(): partition(s) not found in /sys, waiting for their uevent(s): vendor
[ 8.878132] of_batterydata_get_best_profile: 2951034_foxda_ef501esp_3000mah_averaged_masterslave_jun6th2017 found
[ 8.894872] FG: fg_batt_profile_init: Battery SOC: 97, V: 4249242uV
[ 8.900528] of_batterydata_get_best_profile: 2951034_foxda_ef501esp_3000mah_averaged_masterslave_jun6th2017 found
[ 8.910371] SMBCHG: smbchg_config_chg_battery_type: Vfloat changed from 4400mV to 4350mV for battery-type 2951034_foxda_ef501esp_3000mah_averaged_masterslave_jun6th2017
[ 17.746065] init: Wait for partitions returned after 10009ms
[ 17.750729] init: bool android::init::FirstStageMount::InitRequiredDevices(): partition(s) not found after polling timeout: vendor
[ 17.768123] init: Failed to mount required partitions early ...
[ 17.773010] init: panic: rebooting to bootloader
[ 17.777611] init: Reboot start, reason: reboot, rebootTarget: bootloader
[ 17.784364] init: android::WriteStringToFile open failed: No such file or directory
[ 17.791947] init: Shutdown timeout: 0 ms
[ 17.795852] init: property_set("persist.vendor.crash.detect", "false") failed: __system_property_add failed
[ 17.805838] init: waitid failed: No child processes
[ 17.810437] init: vold not running, skipping vold shutdown
[ 17.916293] init: powerctl_shutdown_time_ms:138:0
[ 17.919991] init: Reboot ending, jumping to kernel
[ 17.924752] msm_thermal:msm_thermal_update_freq Freq mitigation task is not initialized
[ 17.978348] mdss_fb_release_all: try to close unopened fb 1! from pid:1 name:init
[ 17.984816] mdss_fb_release_all: try to close unopened fb 0! from pid:1 name:init
[ 17.993627] reboot: Restarting system with command 'bootloader'
[ 17.998538] Going down for restart now
[ 18.002831] qcom,qpnp-power-on qpnp-power-on-12: PMIC#SID2: configuring PON for reset
I am encountering this myself, and I believe it has to do with a combination of an incomplete defconfig and dm-verity in the SoC's dts (unless you have worked it out by now).
Regarding the incomplete defconfig, I had to add options like CONFIG_PINCTRL_MSM8937=y, as my Android device is technically an 8937. Also, in arch/arm/boot/dts/qcom/msm8937.dtsi I needed to remove the verify flag from fsmgr_flags, roughly as sketched below.
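This is only an illustrative sketch of that dtsi change; the node layout and the other properties of the vendor fstab entry are assumptions and will differ per device tree:

firmware {
	android {
		fstab {
			vendor {
				compatible = "android,vendor";
				/* dev, type and mnt_flags stay as they are in your tree */
				/* was: fsmgr_flags = "wait,verify"; */
				fsmgr_flags = "wait";
			};
		};
	};
};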
I now have other issues, but I am getting closer to booting.

Multicontainer Docker application failing on deploy

So I have a problem deploying my application to Elastic Beanstalk on AWS. My application is a multi-container Docker application that includes a Node server and MongoDB. Somehow the application crashes every time, and I get this bizarre error from MongoDB.
The error is as follows:
2018-05-28T12:53:02.510+0000 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 3867 processes, 32864 files. Number of processes should be at least 16432 : 0.5 times number of files.
2018-05-28T12:53:02.540+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2018-05-28T12:53:02.541+0000 I NETWORK [initandlisten] waiting for connections on port 27017
2018-05-28T12:53:03.045+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2018-05-28T12:53:03.045+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2018-05-28T12:53:03.045+0000 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
2018-05-28T12:53:03.045+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2018-05-28T12:53:03.047+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2018-05-28T12:53:03.161+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2018-05-28T12:53:03.161+0000 I CONTROL [signalProcessingThread] now exiting
2018-05-28T12:53:03.161+0000 I CONTROL [signalProcessingThread] shutting down with code:0
This is my Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "mongo-app",
      "host": {
        "sourcePath": "/var/app/mongo-app"
      }
    },
    {
      "name": "some-api",
      "host": {
        "sourcePath": "/var/app/some-api"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "mongo-app",
      "image": "mongo:latest",
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 27017,
          "containerPort": 27017
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "mongo-app",
          "containerPath": "/data/db"
        }
      ]
    },
    {
      "name": "server",
      "image": "node:8.11",
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8001
        }
      ],
      "links": [
        "mongo-app"
      ],
      "mountPoints": [
        {
          "sourceVolume": "some-api",
          "containerPath": "/some-data"
        }
      ]
    }
  ]
}
And this is my Dockerfile:
FROM node:8.11
RUN mkdir -p /api
WORKDIR /api
COPY package.json /api
RUN cd /api && npm install
COPY . /api
EXPOSE 8001
CMD ["node", "api/app.js"]
Any ideas why the application is crashing and does not deploy? It seems to me that MongoDB is causing the problem, but I can't understand or find the root of it.
Thank you in advance!
I spent a while trying to figure this out as well.
The solution: add a mount point for "containerPath": "/data/configdb". Mongo expects to be able to write to both /data/db and /data/configdb.
Also, you might want to bump "memory": 128 for Mongo up to something higher.
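For example, the extra pieces in Dockerrun.aws.json could look roughly like the following; the volume name mongo-configdb and the host path /var/app/mongo-configdb are purely illustrative choices, not names Mongo requires. Add another volume:

{
  "name": "mongo-configdb",
  "host": {
    "sourcePath": "/var/app/mongo-configdb"
  }
}

and a matching entry under the mongo-app container's "mountPoints":

{
  "sourceVolume": "mongo-configdb",
  "containerPath": "/data/configdb"
}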

How to install a Node.js CMS like PencilBlue on Uberspace

I would like to have the PencilBlue Node.js CMS with MongoDB installed on my Uberspace account. Which steps do I have to take?
As I found it hard to figure out how to do this, here is how I finally succeeded. Most of it is relevant for Node.js installations other than PencilBlue as well.
First you need to create an account on uberspace.de.
Open your terminal and log in to your uberspace console via SSH:
ssh {account}@{server}.uberspace.de
Enter the password you chose when you created the account.
Create the service directory:
uberspace-setup-svscan
Create the mongo database:
uberspace-setup-mongodb
Create folder for database data:
mkdir data
cd data
mkdir db
Start db:
mongod --dbpath data/db/
You will get some login data. I suggest you write it down somewhere:
Hostname: localhost
Portnum#: {dbPort}
Username: {account}_mongoadmin
Password: {dbPassword}
To connect to the db via shell you may use:
mongo admin --port {dbPort} -u {account}_mongoadmin -p
Configure npm:
cat > ~/.npmrc <<__EOF__
prefix = $HOME
umask = 077
__EOF__
Install pencilblue-cli:
npm install pencilblue-cli
Change to the html folder and create a .htaccess file (you could do this with your FTP client as well):
RewriteEngine On
RewriteRule ^(.*) http://localhost:8080/$1 [P]
Now, if you want to use GitHub:
Create a new repository on GitHub.
Open a new terminal window and clone the PencilBlue CMS into a local folder on your machine:
git clone git@github.com:pencilblue/pencilblue.git pencilblue
cd pencilblue
git remote set-url origin git@github.com:{yourGitName}/{yourRepoName}.git
git add .
git commit -m "Initial commit."
Set up SSH on Uberspace:
Go back to your uberspace console.
ssh-keygen -t rsa -b 4096 -C "{yourEmailAddress}"
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub
Copy the whole key that is printed out and paste it into GitHub under Settings / SSH keys.
Clone the new repo in the uberspace console:
git clone git@github.com:{yourGitName}/{yourRepoName}.git cms
cd cms
Create a config.js, either with vim config.js or by uploading it via FTP:
module.exports = {
  "siteName": "{yourSiteName}",
  "siteRoot": "http://{account}.{server}.uberspace.de/",
  "sitePort": 8080,
  "logging": {
    "level": "info"
  },
  "db": {
    "type": "mongo",
    "servers": [
      "mongodb://{account}_mongoadmin:{dbPassword}@127.0.0.1:{dbPort}/"
    ],
    "name": "admin",
    "writeConcern": 1
  },
  "cache": {
    "fake": false,
    "host": "localhost",
    "port": 6379
  },
  "settings": {
    "use_memory": false,
    "use_cache": false
  },
  "templates": {
    "use_memory": true,
    "use_cache": false
  },
  "plugins": {
    "caching": {
      "use_memory": false,
      "use_cache": false
    }
  },
  "registry": {
    "type": "mongo"
  },
  "session": {
    "storage": "mongo"
  },
  "media": {
    "provider": "mongo",
    "max_upload_size": 6291456
  },
  "cluster": {
    "workers": 1,
    "self_managed": true
  },
  "siteIP": "0.0.0.0"
};
Install node_modules:
npm install
Create a service that starts the server:
uberspace-setup-service pbservice node ~/cms/pencilblue.js
Start the service:
svc -u ~/service/pbservice
Now you can go to the page on http://{account}.{server}.uberspace.de
(To start the service (hint: u = up):
svc -u ~/service/pbservice
To stop the service (hint: d = down):
svc -d ~/service/pbservice
To reload the service (hint: h = HUP):
svc -h ~/service/pbservice
To restart the service (hint: du = down, up):
svc -du ~/service/pbservice
To remove the service:
cd ~/service/pbservice
rm ~/service/pbservice
svc -dx . log
rm -rf ~/etc/run-pbservice)
