Unpacking a UBI image with a UBIFS image inside - linux

I ran into a problem during my research.
I have a firmware file that was downloaded off the internet, and I'm trying to unpack it so I can emulate the firmware.
The good news is that I did it successfully once, but I reverted my machine and I can't recreate the process now.
First of all, the file can't be extracted by any tool, because you get an error that less than 2 layout blocks were found.
After that I dumped some info from the UBI file:
==> app_ubifs <==
1 named volumes found, 2 physical volumes, blocksize=0x20000
== volume b'bakfs' ==
-rw-r--r-- 1 0 0 37879808 2020-04-22 01:27:47 ubifs
So from the time I got this to succeed, I know that the volume bakfs contains another UBIFS image that can be extracted successfully with public tools.
I have tested a lot of ways to mount this image, but it always fails at the mount step.
modprobe ubi
modprobe nandsim first_id_byte=0x20 second_id_byte=0xaa \
third_id_byte=0x00 fourth_id_byte=0x15
I believe this is the right config for blocksize=0x20000.
ubiformat /dev/mtd0 -f app_ubifs
ubiformat: mtd0 (nand), size 268435456 bytes (256.0 MiB), 2048 eraseblocks of 131072 bytes (128.0 KiB), min. I/O size 2048 bytes
libscan: scanning eraseblock 2047 -- 100 % complete
ubiformat: 2048 eraseblocks have valid erase counter, mean value is 0
ubiformat: flashing eraseblock 282 -- 100 % complete
ubiformat: formatting eraseblock 2047 -- 100 % complete
Formatting and flashing also work fine.
It's the next part that I really don't understand.
There are 100 different ways described online, and I can't seem to get it to work.
I would appreciate it if someone could help me with the process.
As I said, I already have the unpacked version with the filesystem, but I can't recreate the unpacking process now.
So I know it's possible.

---- solution
modprobe nandsim first_id_byte=0x2c second_id_byte=0xac third_id_byte=0x90 fourth_id_byte=0x15
This makes the simulated device for blocksize=0x20000.
Check that it is set up:
cat /proc/mtd
Let's clean it first:
flash_erase /dev/mtd0 0 0
Now format and flash the image.
ubiformat /dev/mtd0 -f image.ubi -O 2048
Then attach the device.
modprobe ubi
ubiattach -p /dev/mtd0 -O 2048
And now I can mount it.
mount -t ubifs /dev/ubi0_X /mnt/ubifs
In my case it was ubi0_1; make sure to check this under /dev.
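To figure out which ubi0_X corresponds to the volume you want (bakfs here), it can help to list the attached UBI devices and their volumes first. A quick sanity check, assuming the ubinfo tool from mtd-utils is installed:
# list attached UBI devices and their volumes (names, sizes, volume IDs)
ubinfo -a
# the volume ID shown there is the X in /dev/ubi0_X
ls -l /dev/ubi0*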

Another quick alternative to access the files inside the image, in case the nandsim module is not available for the current kernel (on a Debian-based OS):
apt install liblzo2-dev
pip install python-lzo ubi_reader
Then, in the same folder where the UBIFS image is located, execute ubireader_extract_files ubifs.img and there you go:
├── ubifs.img
└── ubifs-root
└── 705623055
└── rootfs
├── bin
├── boot
├── dev
├── etc
├── home
├── lib
├── linuxrc -> /bin/busybox
├── media
├── mnt
├── opt
├── proc
├── sbin
├── Settings
├── srv
├── sys
├── tmp -> /var/tmp
├── usr
├── var
└── www
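If the dump contains several volumes (like app_ubifs above, where the inner UBIFS sits in the bakfs volume), the same ubi_reader package also provides ubireader_display_info and ubireader_extract_images, which can be used to inspect the volumes and pull out the per-volume UBIFS images before extracting files. A rough sketch (exact output paths may differ between ubi_reader versions):
# show volume names, block size and other UBI/UBIFS parameters
ubireader_display_info app_ubifs
# write each volume's UBIFS image into an output directory
ubireader_extract_images app_ubifs
# then extract the files from the image of the volume you care about
ubireader_extract_files <path-to-extracted-bakfs-image>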

Related

Upload local file to azure static web storage container $web using azcopy and terraform local-exec provisioner

I have been struggling to upload a bunch of CSS/HTML/JS files to a static website hosted on a storage container $web using Terraform. It fails even with a single index.html, throwing the error below.
Error: local-exec provisioner error
│
│ with null_resource.frontend_files,
│ on c08-02-website-storage-account.tf line 111, in resource "null_resource" "frontend_files":
│ 111: provisioner "local-exec" {
│
│ Error running command '
azcopy cp --from-to=LocalBlob "../../code/frontend/index.html" "https://***********.blob.core.windows.net/web?sv=2018-11-09&sr=c&st=2022-01-01T00%3A00%3A00Z&se=2023-01-01T00%3A00%3A00Z&sp=racwl&spr=https&sig=*******************" --recursive
': exit status 1. Output: INFO: Scanning...
│ INFO: Any empty folders will not be processed, because source and/or
│ destination doesn't have full folder support
│
│ Job 718f9960-b7eb-7843-648a-6b57d14f5e27 has started
│ Log file is located at:
│ /home/runner/.azcopy/718f9960-b7eb-7843-648a-6b57d14f5e27.log
│
│
100.0 %, 0 Done, 0 Failed, 0 Pending, 0 Skipped, 0 Total,
│
│
│ Job 718f9960-b7eb-7843-648a-6b57d14f5e27 summary
│ Elapsed Time (Minutes): 0.0336
│ Number of File Transfers: 1
│ Number of Folder Property Transfers: 0
│ Total Number of Transfers: 1
│ Number of Transfers Completed: 0
│ Number of Transfers Failed: 1
│ Number of Transfers Skipped: 0
│ TotalBytesTransferred: 0
│ Final Job Status: Failed
│
The $web container is empty. So I placed a dummy index.html file in it before I executed the code, to see if that would make this "empty folder" error go away. But still no luck.
I gave the complete set of permissions to the SAS key to rule out any access issue.
I suspect the azcopy command is unable to navigate to the source folder and get the contents to be uploaded, but I am not sure.
Excerpt from the tf file:
resource "null_resource" "frontend_files"{
depends_on = [data.azurerm_storage_account_blob_container_sas.website_blob_container_sas,
azurerm_storage_account.resume_static_storage]
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
command = <<EOT
azcopy cp --from-to=LocalBlob "../../code/frontend/index.html" "https://${azurerm_storage_account.resume_static_storage.name}.blob.core.windows.net/web${data.azurerm_storage_account_blob_container_sas.website_blob_container_sas.sas}" --recursive
EOT
}
}
Any help would be appreciated.
Per a solution listed here, we need to add an escape character (\) before $web. The following command (to copy all files and subfolders to the $web container) worked for me:
azcopy copy "<local_folder>/*" "https://******.blob.core.windows.net/\$web/?<SAS token>" --recursive
Without the escape character, the command below was failing with the error "failed to perform copy command due to error: cannot transfer individual files/folders to the root of a service. Add a container or directory to the destination URL":
azcopy copy "<local_folder>/*" "https://******.blob.core.windows.net/$web/?<SAS token>" --recursive

How to install Elasticsearch in a custom path?

I need to install Elasticsearch to the path /opt/elasticsearch/, with everything under this path.
That is, I need the config path under /opt too.
According to https://www.elastic.co/guide/en/elasticsearch/reference/7.15/settings.html I can set the ES_PATH_CONF and ES_HOME env vars to change the installation and config paths, but it doesn't work.
rpm --install elasticsearch-7.15.0-x86_64.rpm --prefix=/opt/elasticsearch/ is not what I need and doesn't change the config path.
It puts the home directory in /opt/elasticsearch, but I get the following structure and the paths don't change; it still wants the executables in /usr/share/elasticsearch/bin/.
el6:~ # tree /opt/elasticsearch/ -d -L 3
/opt/elasticsearch/
├── lib
│   ├── sysctl.d
│   ├── systemd
│   │   └── system
│   └── tmpfiles.d
└── share
    └── elasticsearch
        ├── bin
        ├── jdk
        ├── lib
        ├── modules
        └── plugins
but I need:
el5:~ # tree /opt/elasticsearch/ -d -L 1
/opt/elasticsearch/
├── bin
├── config
├── data
├── jdk
├── lib
├── logs
├── modules
└── plugins
With a manual installation:
mkdir /opt/elasticsearch/ && tar -xzf elasticsearch-7.15.0-linux-x86_64.tar.gz -C /opt/elasticsearch/ --strip-components 1
I get the structure I need. I made a systemd service:
[Unit]
Description=Elasticsearch
Documentation=https://www.elastic.co
Wants=network-online.target
After=network-online.target
[Service]
Type=notify
RuntimeDirectory=elasticsearch
PrivateTmp=true
Environment=ES_HOME=/opt/elasticsearch
Environment=ES_PATH_CONF=/opt/elasticsearch/config
Environment=PID_DIR=/var/run/elasticsearch
Environment=ES_SD_NOTIFY=true
EnvironmentFile=-/etc/sysconfig/elasticsearch
WorkingDirectory=/opt/elasticsearch
User=elasticsearch
Group=elasticsearch
ExecStart=/opt/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet
# StandardOutput is configured to redirect to journalctl since
# some error messages may be logged in standard output before
# elasticsearch logging system is initialized. Elasticsearch
# stores its logs in /var/log/elasticsearch and does not use
# journalctl by default. If you also want to enable journalctl
# logging, you can simply remove the "quiet" option from ExecStart.
StandardOutput=journal
StandardError=inherit
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65535
# Specifies the maximum number of processes
LimitNPROC=4096
# Specifies the maximum size of virtual memory
LimitAS=infinity
# Specifies the maximum file size
LimitFSIZE=infinity
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0
# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM
# Send the signal only to the JVM rather than its control group
KillMode=process
# Java process is never killed
SendSIGKILL=no
# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143
# Allow a slow startup before the systemd notifier module kicks in to extend the timeout
TimeoutStartSec=5000
[Install]
WantedBy=multi-user.target
But it doesn't start, doesn't crash, and doesn't write any logs to journalctl.
How can I install Elasticsearch in /opt with the configs in it?
You could install Elasticsearch to /opt/elasticsearch using your rpm command, then move the config files from their default location to your location of choice, and finally change the ES_PATH_CONF and ES_HOME env vars to their respective new paths.
When using the "manual" installation method (by downloading the .tar.gz) you have the freedom to put the files where you want. wget returns 404 because the file/URL does not exist. wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.15.0-linux-x86_64.tar.gz should be the correct one. (you're missing -linux)
The only way to do that is to download the tar.gz into your directory, then manually set all the environment variables and build and manage your own init script.
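To make that concrete, a minimal sketch of the tar.gz route, assuming the files should be owned by a dedicated elasticsearch user (the user name and paths are just illustrative; the systemd unit from the question already sets ES_HOME and ES_PATH_CONF via Environment=):
# extract the archive into /opt/elasticsearch (same command as in the question)
mkdir -p /opt/elasticsearch && tar -xzf elasticsearch-7.15.0-linux-x86_64.tar.gz -C /opt/elasticsearch --strip-components 1
# run it as a dedicated, unprivileged user
useradd -r -s /sbin/nologin elasticsearch
chown -R elasticsearch:elasticsearch /opt/elasticsearch
# quick manual test run before wiring up systemd
sudo -u elasticsearch ES_HOME=/opt/elasticsearch ES_PATH_CONF=/opt/elasticsearch/config /opt/elasticsearch/bin/elasticsearch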

Pm2 logs with huge size using pm2-logrotate

I'm having trouble with pm2.
I'm using a module called pm2-logrotate, but the logs grow to a huge size, like 1.7G, and don't respect my configuration, which is:
== pm2-logrotate ==
┌────────────────┬───────────────┐
│ key            │ value         │
├────────────────┼───────────────┤
│ compress       │ true          │
│ rotateInterval │ * * */1 * * * │
│ max_size       │ 10M           │
│ retain         │ 1             │
│ rotateModule   │ true          │
│ workerInterval │ 30            │
└────────────────┴───────────────┘
So what can I do so that pm2 deletes the old logs and doesn't start crushing my machine with a huge amount of data?
I had this problem too. I think there's currently a bug in pm2-logrotate where the workerInterval option is ignored, and it only rotates according to the rotateInterval option (i.e. once per day by default). And that means that the files can get much bigger than the size you specified with the max_size option. See options here.
I "solved" it by setting the rotateInterval option to every 30 mins instead of the default of once per day. Here's the command:
pm2 set pm2-logrotate:rotateInterval '*/30 * * * *'
The problem with this is that it means your logs will rotate every 30 mins no matter what size they are. Another temporary solution would be to run pm2 flush (which deletes all logs) with crontab. First run crontab -e in your terminal, and then add this line to the file:
*/30 * * * * pm2 flush
You can also flush a specific app with pm2 flush your_app_name if you've got a particular app that produces a lot of logs. If you're not good at remembering how cron timing syntax works (like me), you can use this site.
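For completeness, the module settings themselves are changed with pm2 set; a short sketch, assuming pm2-logrotate is already installed (the values mirror the configuration shown in the question plus the 30-minute rotateInterval workaround):
pm2 set pm2-logrotate:max_size 10M
pm2 set pm2-logrotate:retain 1
pm2 set pm2-logrotate:compress true
pm2 set pm2-logrotate:rotateInterval '*/30 * * * *'
# print the module configuration to verify the values were picked up
pm2 conf pm2-logrotate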

SSTable multiple directories

I have recently started working with Cassandra; everything has been well documented and easy to understand so far.
However, I am unable to find an answer to the following question:
Why does the Cassandra data directory (/var/lib/cassandra/data/ks) have multiple subdirectories for the same table?
And at what point is the new directory created?
[centos#cs1 2017-06-03--19-46-14 cassandra $] ls -l /var/lib/cassandra/data/ks
total 8
drwxr-xr-x. 3 root root 4096 Jun 3 19:46 events-4f35e2c0482911e79119511599d22fe7
drwxr-xr-x. 3 root root 4096 Jun 3 19:41 events-7a34c34047f411e7aee3b9dc2549db1c
[centos#cs1 2017-06-03--19-46-10 cassandra $] tree
.
├── events-4f35e2c0482911e79119511599d22fe7
│   ├── ks-events-ka-4-CompressionInfo.db
│   ├── ks-events-ka-4-Data.db
│   ├── ks-events-ka-4-Digest.sha1
│   ├── ks-events-ka-4-Filter.db
│   ├── ks-events-ka-4-Index.db
│   ├── ks-events-ka-4-Statistics.db
│   ├── ks-events-ka-4-Summary.db
│   ├── ks-events-ka-4-TOC.txt
│   └── snapshots
└── events-7a34c34047f411e7aee3b9dc2549db1c
    └── snapshots
        └── 1496472654574-device_log
            └── manifest.json
5 directories, 9 files
I noticed that flushing or compacting does not create a new directory. It simply adds to/compacts the most recent SSTable directory.
When you drop a table, Cassandra by default takes a snapshot to prevent data loss in case the drop was unintended. In your case, events-7a34c34047f411e7aee3b9dc2549db1c is the older table, and it only has a snapshots directory in it.
The cassandra.yaml parameter responsible for that behaviour is as follows:
auto_snapshot (Default: true) Enable or disable whether a snapshot
is taken of the data before keyspace truncation or dropping of tables.
To prevent data loss, using the default setting is strongly advised.
If you set to false, you will lose data on truncation or drop.
Remember to clean up the older table snapshots in production-like environments; otherwise they can easily pile up and bloat the data directory.
If you drop a keyspace (ks in my case), it does not delete the keyspace directory (/var/lib/cassandra/data/ks) from the filesystem (auto_snapshot: true). That is why I was still seeing the old directories.
In Cassandra, when you drop a table, its directory remains in the keyspace directory.
In your case, it seems you created a table with the same name as the table you dropped before. Because of that you have one table with two directories, one of which is useless; you can rm -rf that directory or run nodetool clearsnapshot.
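As a sketch of that cleanup (exact nodetool syntax varies a little between Cassandra versions, so treat these as examples rather than exact commands):
# list existing snapshots and the space they occupy
nodetool listsnapshots
# remove all snapshots belonging to the ks keyspace
nodetool clearsnapshot -- ks
# or delete the stale table directory by hand once you are sure it is unused
rm -rf /var/lib/cassandra/data/ks/events-7a34c34047f411e7aee3b9dc2549db1c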

ArangoDB: --database.directory setting not respected when giving the instance a name

When I run arangod like this:
arangod --supervisor --daemon --pid-file some_file \
--configuration some_file.conf \
single_instance
then the
--database.directory
option is ignored and a
/var/tmp/single_instance
directory containing
├── single_instance_db
├── databases
├── journals
├── LOCK
├── rocksdb
├── SERVER
└── SHUTDOWN
is created.
But when I run arangod like this:
arangod --supervisor --daemon --pid-file some_file \
--configuration some_file.conf
then the
--database.directory
option is honoured.
Why?
(Is it a bug?)
One of the things arangod prints during startup for me is:
changed working directory for child process to '/var/tmp'
This is done by the supervisor before it forks the child.
Since you gave it the relative directory single_instance, the database directory is created in the then-current working directory /var/tmp/, resulting in a total database directory of /var/tmp/single_instance.
If you want a specific directory in that situation, you should specify an absolute path like /var/lib/arangodb/single_instance.
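For example, keeping the same invocation but passing the database directory as an absolute path (the path here is only illustrative):
arangod --supervisor --daemon --pid-file some_file \
    --configuration some_file.conf \
    /var/lib/arangodb/single_instance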
