Given a history like this in Nushell, how do I delete specific entries; e.g. entries 6, 8, and 10?
nu > history
╭────┬────────────────────╮
│  # │ command            │
├────┼────────────────────┤
│  0 │ history --clear    │
│  1 │ php --version      │
│  2 │ composer --version │
│  3 │ node --version     │
│  4 │ npm --version      │
│  5 │ composer --version │
│  6 │ history            │
│  7 │ php --version      │
│  8 │ history            │
│  9 │ php --version      │
│ 10 │ history            │
│ 11 │ composer --version │
╰────┴────────────────────╯
Based on the code that I can read here and on the documentation here, it appears that an option like this is not currently available.
However, the code indicates that the history.txt file is located in ~/.config/nushell. Using this information, it is possible to accomplish what you asked with the script below:
nushell_history_manager.py
import os
import sys


def delete_lines(file_path, line_numbers):
    # open the file in read mode
    with open(file_path, 'r') as f:
        # read all the lines and store them in a list
        lines = f.readlines()
    # open the file in write mode
    with open(file_path, 'w') as f:
        for i, line in enumerate(lines):
            # check if the current line number is not in the list of line numbers to delete
            if i + 1 not in line_numbers:
                # if it's not, write the line back to the file
                f.write(line)


def print_table(file_path):
    # open the file in read mode
    with open(file_path, 'r') as f:
        # read all the lines and store them in a list
        lines = f.readlines()
    # print the table header
    print("╭──────┬───────────────────────────────╮")
    print("│  ##  │ command                       │")
    print("├──────┼───────────────────────────────┤")
    for i, line in enumerate(lines):
        # print each line number and the corresponding command
        print(f"│ {i+1:4} │ {line.strip():29} │")
    # print the table footer
    print("╰──────┴───────────────────────────────╯")


if __name__ == '__main__':
    # set the file path to the history.txt file in the nushell config directory
    file_path = os.path.expanduser('~/.config/nushell/history.txt')
    # print the initial contents of the file in a table format
    print_table(file_path)
    # ask the user to enter line numbers to delete
    line_numbers_str = input("Enter line numbers to delete (separated by commas): ")
    # convert the entered line numbers to a list of integers
    line_numbers = list(map(int, line_numbers_str.split(',')))
    # delete the specified lines from the file
    delete_lines(file_path, line_numbers)
    # print the updated contents of the file in a table format
    print_table(file_path)
Usage:
python nushell_history_manager.py
Runtime:
╭──────┬────────────────────╮
│  ##  │ command            │
├──────┼────────────────────┤
│    1 │ history --clear    │
│    2 │ php --version      │
│    3 │ composer --version │
│    4 │ node --version     │
│    5 │ npm --version      │
│    6 │ composer --version │
│    7 │ history            │
│    8 │ php --version      │
│    9 │ history            │
│   10 │ php --version      │
│   11 │ history            │
│   12 │ composer --version │
╰──────┴────────────────────╯
Enter line numbers to delete (separated by commas): 7,9,11
╭──────┬────────────────────╮
│  ##  │ command            │
├──────┼────────────────────┤
│    1 │ history --clear    │
│    2 │ php --version      │
│    3 │ composer --version │
│    4 │ node --version     │
│    5 │ npm --version      │
│    6 │ composer --version │
│    7 │ php --version      │
│    8 │ php --version      │
│    9 │ composer --version │
╰──────┴────────────────────╯
nu's history command does not (yet) provide functionality to delete items, similar to bash's history -d. However, you can query where the history file is located using $nu.history-path, then use drop nth to delete the lines in question.
open $nu.history-path | lines | drop nth 6 8 10 | save -f $nu.history-path
While @pmf's response does exactly what you ask, a more generic form is shown below. Here you don't even have to know which lines of the file are duplicates:
export def dedupeLines [filepath: string = $"($nu.history-path)"] {
    open $filepath | lines | uniq | save --force $filepath
}
In your specific case, filepath = $nu.history-path.
Executing the command below will accomplish your request:
dedupeLines # since $nu.history-path is the default filepath for the command
Also, for the future: the Nushell Discord has many questions and answers, and the folks there are extremely helpful and prompt in responding to queries. I had actually asked your question there before, too.
I don't know whether the Python solution could be written in fewer lines of code, but it is interesting how much Nushell accomplishes in a single line compared to the Python solution.
Related
After executing the command mc ilm rule ls myminio/receive I get the output below.
┌───────────────────────────────────────────────────────────────────────────────────────┐
│ Expiration for latest version (Expiration)                                             │
├──────────────────────┬─────────┬────────┬──────┬────────────────┬─────────────────────┤
│ ID                   │ STATUS  │ PREFIX │ TAGS │ DAYS TO EXPIRE │ EXPIRE DELETEMARKER │
├──────────────────────┼─────────┼────────┼──────┼────────────────┼─────────────────────┤
│ cf59qti8ufll86j92g   │ Enabled │ -      │ -    │ 90             │ false               │
│ cf7701klgt9a0vts40   │ Enabled │ -      │ -    │ 90             │ false               │
│ cf7712tlgie49qo4p0   │ Enabled │ -      │ -    │ 90             │ false               │
From this output, I want only the list of IDs. How do I extract them?
My expected output:
cf59qti8ufll86j92g
cf7701klgt9a0vts40
cf7712tlgie49qo4p0
For example,
command | perl -ne 'print if (s/^\s*\S+\s+([a-z0-9]{18})\s.*$/\1/)'
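If Perl is not at hand, a rough awk equivalent is sketched below (an assumption-laden one-liner: it presumes the ID is always the second whitespace-separated field of the data rows and that your awk supports {18} interval expressions):
command | awk '$2 ~ /^[a-z0-9]{18}$/ { print $2 }'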
the use case
I have followed the Flask tutorial on this page: https://flask.palletsprojects.com/en/1.1.x/tutorial/
The project directory is the following:
/src
├── flaskr/
│   ├── __init__.py
│   ├── db.py
│   ├── schema.sql
│   ├── auth.py
│   ├── blog.py
│   ├── templates/
│   │   ├── base.html
│   │   ├── auth/
│   │   │   ├── login.html
│   │   │   └── register.html
│   │   └── blog/
│   │       ├── create.html
│   │       ├── index.html
│   │       └── update.html
│   └── static/
│       └── style.css
├── tests/
│   ├── conftest.py
│   ├── data.sql
│   ├── test_factory.py
│   ├── test_db.py
│   ├── test_auth.py
│   └── test_blog.py
├── venv/
├── setup.py
└── MANIFEST.in
run the web app
The web app works by running the script run_app_prod.sh.
The code works on my local machine (Ubuntu 20.04) but not on Azure Web App for Linux.
run_app_prod.sh
#!/bin/bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
echo current $DIR
python -m pip install -e src
./test_if_db_exists.sh
export FLASK_APP="flaskr:create_app()"
# export FLASK_ENV=development
# flask run --host=0.0.0.0
gunicorn --bind=0.0.0.0 --timeout 600 "flaskr:create_app()"
the flaskr/__init__.py file
# *-* coding:utf-8 *-*
import os
from flask import Flask
from flaskr.auth import ShowLoginPage, ShowLogoutPage, ShowRegisterPage
from flaskr.sca_tork_easycube_api import ShowActionsPage
def create_app(test_config=None):
    # create and configure the app
    app = Flask(__name__, instance_relative_config=True)
    app.config.from_mapping(
        SECRET_KEY='dev',
        DATABASE=os.path.join(app.instance_path, 'flaskr.sqlite'),
    )

    if test_config is None:
        # load the instance config, if it exists, when not testing
        app.config.from_pyfile('config.py', silent=True)
    else:
        # load the test config if passed in
        app.config.from_mapping(test_config)

    # ensure the instance folder exists
    try:
        os.makedirs(app.instance_path)
    except OSError:
        pass

    # a simple page that says hello
    @app.route('/hello')
    def hello():
        return 'Hello, World!'

    # https://flask.palletsprojects.com/en/1.1.x/tutorial/database/
    from . import db
    db.init_app(app)

    # https://flask.palletsprojects.com/en/1.1.x/tutorial/views/
    # /register /login /logout
    from . import auth
    app.register_blueprint(auth.bp)
    app.add_url_rule('/auth/login', view_func=ShowLoginPage.as_view('auth.login'))
    app.add_url_rule('/auth/logout', view_func=ShowLogoutPage.as_view('auth.logout'))
    app.add_url_rule('/auth/register', view_func=ShowRegisterPage.as_view('auth.register'))

    from . import sca_tork_easycube_api
    app.add_url_rule('/sca_tork_easycube_api/actions', view_func=ShowActionsPage.as_view('sca_tork_easycube_api.actions'))

    # index.html
    from . import index
    app.register_blueprint(index.bp)
    app.add_url_rule('/', endpoint='index')

    return app
the error message
A P P S E R V I C E O N L I N U X
Documentation: http://aka.ms/webapp-linux
Python 3.7.5
Note: Any data outside '/home' is not persisted
Starting OpenBSD Secure Shell server: sshd.
Site's appCommandLine: gunicorn --bind=0.0.0.0 --timeout 600 "flaskr:create_app()"
App will launch in debug mode
Launching oryx with: -debugAdapter ptvsd -debugPort 49494 -appPath /home/site/wwwroot -output /opt/startup/startup.sh -virtualEnvName antenv -defaultApp /opt/defaultsite -bindPort 8000 -userStartupCommand 'gunicorn --bind=0.0.0.0 --timeout 600 "flaskr:create_app()"'
Oryx Version: 0.2.20200114.13, Commit: 204922f30f8e8d41f5241b8c218425ef89106d1d, ReleaseTagName: 20200114.13
Found build manifest file at '/home/site/wwwroot/oryx-manifest.toml'. Deserializing it...
Build Operation ID: |4NuzYKKsZco=.40b8e078_
Writing output script to '/opt/startup/startup.sh'
Found virtual environment .tar.gz archive.
Removing existing virtual environment directory /antenv...
Extracting to directory /antenv...
Using packages from virtual environment antenv located at /antenv.
Updated PYTHONPATH to ':/antenv/lib/python3.7/site-packages'
[42] [INFO] Starting gunicorn 20.0.4
[42] [INFO] Listening at: http://0.0.0.0:8000 (42)
[42] [INFO] Using worker: sync
[45] [INFO] Booting worker with pid: 45
[45] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/opt/python/3.7.5/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process()
File "/opt/python/3.7.5/lib/python3.7/site-packages/gunicorn/workers/base.py", line 119, in init_process
self.load_wsgi()
File "/opt/python/3.7.5/lib/python3.7/site-packages/gunicorn/workers/base.py", line 144, in load_wsgi
self.wsgi = self.app.wsgi()
File "/opt/python/3.7.5/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/opt/python/3.7.5/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in load
return self.load_wsgiapp()
File "/opt/python/3.7.5/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 39, in load_wsgiapp
return util.import_app(self.app_uri)
File "/opt/python/3.7.5/lib/python3.7/site-packages/gunicorn/util.py", line 358, in import_app
mod = importlib.import_module(module)
File "/opt/python/3.7.5/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'flaskr'
[45] [INFO] Worker exiting (pid: 45)
[42] [INFO] Shutting down: Master
[42] [INFO] Reason: Worker failed to boot.
I had a similar error without Azure. Analyzing this issue, I suspected it was an import problem related to Gunicorn.
My solution was to append the missing directory path to sys.path at the beginning of my application entry script (flaskr/__init__.py in your case):
import sys
sys.path.append('/src/flaskr')
I don't know if it's good practice, but it worked for me. Hope it helps you too.
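If you would rather not modify the application code, another option (an untested sketch, assuming the flaskr package sits inside the src/ directory of the deployed app, as in the question's layout) is to have gunicorn change into that directory before importing the app:
gunicorn --chdir src --bind=0.0.0.0 --timeout 600 "flaskr:create_app()"
On Azure this would go into the site's appCommandLine instead of the default gunicorn invocation.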
This is the first time I am writing a crontab.
Following is the structure of the time and date fields:
# * * * * * command to execute
# │ │ │ │ │
# │ │ │ │ │
# │ │ │ │ └───── day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0)
# │ │ │ └────────── month (1 - 12)
# │ │ └─────────────── day of month (1 - 31)
# │ └──────────────────── hour (0 - 23)
# └───────────────────────── min (0 - 59)
I put in an entry like the following:
00 06 * /bin/sh /opt/cleanup.sh
I think this is not working.
Where can I see the crontab logs?
Usually the cron commands' output is sent to the owner via mail (man mail), but only when the executed command produces output (stdout or stderr). When you log in you should see something like "You have new mail". I don't know, though, whether wrong cron schedules like yours (see @fedorqui's reply) produce an error log. In any case, to have the output and errors of the scheduled cron jobs written to a file instead of mailed, you can redirect the output like this:
00 06 * * * /bin/sh /opt/cleanup.sh > /where/you/have/write/permission/cleanup.log 2>&1
If you want to append rather than overwrite, just use two > characters (>>), like this:
00 06 * * * /bin/sh /opt/cleanup.sh >> /where/you/have/write/permission/cleanup.log 2>&1
To disable the log, simply schedule the following:
00 06 * * * /bin/sh /opt/cleanup.sh > /dev/null 2>&1
2>&1 means "redirect file descriptor 2 (stderr) to wherever file descriptor 1 (stdout) currently points". Since standard output is redirected to the file (/my/file.sh > /my/file.log), stderr ends up in the same file.
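Note that the order matters: 2>&1 has to come after the file redirection, otherwise stderr still points at the terminal. A quick way to see both streams landing in the same file (a sketch with placeholder messages):
( echo "this goes to stdout"; echo "this goes to stderr" >&2 ) > cleanup.log 2>&1
cat cleanup.log    # contains both lines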
You need to use all five fields. So to run every day at 6 AM, use:
00 06 * * * /bin/sh /opt/cleanup.sh
You can probably see the logs in /var/log/cron.
For further information, check the beautifully written Debugging crontab section in the [crontab] tag wiki.
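To quickly confirm whether cron actually started your job, you can also check the system log directly; a sketch, assuming a Debian/Ubuntu-style system where cron logs to syslog (Red Hat-style systems typically use /var/log/cron):
grep CRON /var/log/syslog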
On my server I have three cron jobs.
By typing crontab -e I get the following:
0 */24 * * * wget -qO /dev/null http://www.example.com/Users/mailNotify?token=1234 >> /var/log/cronLog.txt
0 */23 * * * sh /var/www/backup/backupScript
0 */23 * * * wget -qO /dev/null http://www.example.com/Users/off_score?token=1234 >> /var/log/cronLog.txt
These cron jobs run twice:
at 00:00 and at 01:00 every night.
The funny thing is that all three jobs run at each of those hours.
Can anyone tell me what I have done wrong when creating them?
To have your cron jobs run once at a specific time, you shouldn't use */ in the hour field. 0 */23 * * * does not mean "once every 23 hours": the step is applied within the 0-23 range of a single day, so the jobs match more than one hour per day instead of exactly one, which is why you see them fire twice each night.
To run all of them at midnight like you commented, use cron like this:
0 0 * * * wget -qO /dev/null http://www.example.com/Users/mailNotify?token=1234 >> /var/log/cronLog.txt
0 0 * * * sh /var/www/backup/backupScript
0 0 * * * wget -qO /dev/null http://www.example.com/Users/off_score?token=1234 >> /var/log/cronLog.txt
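If the intent really was to run on a repeating schedule, a step value in the hour field behaves predictably only when it divides 24 evenly; for example, a sketch reusing the backup script from the question:
0 */6 * * * sh /var/www/backup/backupScript    # fires at 00:00, 06:00, 12:00 and 18:00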
Cron definitions:
# * * * * * command to execute
# │ │ │ │ │
# │ │ │ │ │
# │ │ │ │ └───── day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0)
# │ │ │ └────────── month (1 - 12)
# │ │ └─────────────── day of month (1 - 31)
# │ └──────────────────── hour (0 - 23)
# └───────────────────────── min (0 - 59)
The 3rd * in the entry (the day-of-month field) tells cron to run it every day of the month.
Is there any Linux command that I can call from a Bash script that will print the directory structure in the form of a tree, e.g.:
folder1
   a.txt
   b.txt
folder2
folder3
Is this what you're looking for: tree? It should be in most distributions (maybe as an optional install).
~> tree -d /proc/self/
/proc/self/
|-- attr
|-- cwd -> /proc
|-- fd
|   `-- 3 -> /proc/15589/fd
|-- fdinfo
|-- net
|   |-- dev_snmp6
|   |-- netfilter
|   |-- rpc
|   |   |-- auth.rpcsec.context
|   |   |-- auth.rpcsec.init
|   |   |-- auth.unix.gid
|   |   |-- auth.unix.ip
|   |   |-- nfs4.idtoname
|   |   |-- nfs4.nametoid
|   |   |-- nfsd.export
|   |   `-- nfsd.fh
|   `-- stat
|-- root -> /
`-- task
    `-- 15589
        |-- attr
        |-- cwd -> /proc
        |-- fd
        |   `-- 3 -> /proc/15589/task/15589/fd
        |-- fdinfo
        `-- root -> /
27 directories
Sample taken from the maintainer's web page.
You can add the option -L #, where # is replaced by a number, to specify the maximum recursion depth.
Remove -d to also display files.
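For example, a quick sketch (the path is just an illustration):
tree -L 2 /etc      # directories and files, two levels deep
tree -d -L 2 /etc   # directories only, two levels deep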
You can use this one:
ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'
It will show a graphical representation of the current sub-directories without files in a few seconds, e.g. in /var/cache/:
.
|-apache2
|---mod_cache_disk
|-apparmor
|-apt
|---archives
|-----partial
|-apt-xapian-index
|---index.1
|-dbconfig-common
|---backups
|-debconf
Source
This command works to display both folders and files.
find . | sed -e "s/[^-][^\/]*\// |/g" -e "s/|\([^ ]\)/|-\1/"
Example output:
.
|-trace.pcap
|-parent
| |-chdir1
| | |-file1.txt
| |-chdir2
| | |-file2.txt
| | |-file3.sh
|-tmp
| |-json-c-0.11-4.el7_0.x86_64.rpm
Source: comment from @javasheriff here. It's buried as a comment, and posting it as an answer helps users spot it easily.
To add Hassou's solution to your .bashrc, try:
alias lst='ls -R | grep ":$" | sed -e '"'"'s/:$//'"'"' -e '"'"'s/[^-][^\/]*\//--/g'"'"' -e '"'"'s/^/ /'"'"' -e '"'"'s/-/|/'"'"
Since I was not too happy with the output of the other (non-tree) answers (see my comment on Hassou's answer), I tried to mimic tree's output a bit more.
It's similar to Robert's answer, but the horizontal lines do not all start at the beginning; they start where they are supposed to. I had to use perl, though; in my case, on the system where I don't have tree, perl is available.
ls -aR | grep ":$" | perl -pe 's/:$//;s/[^-][^\/]*\// /g;s/^ (\S)/└── \1/;s/(^ | (?= ))/│ /g;s/ (\S)/└── \1/'
Output (shortened):
.
└── fd
└── net
│ └── dev_snmp6
│ └── nfsfs
│ └── rpc
│ │ └── auth.unix.ip
│ └── stat
│ └── vlan
└── ns
└── task
│ └── 1310
│ │ └── net
│ │ │ └── dev_snmp6
│ │ │ └── rpc
│ │ │ │ └── auth.unix.gid
│ │ │ │ └── auth.unix.ip
│ │ │ └── stat
│ │ │ └── vlan
│ │ └── ns
Suggestions to avoid the superfluous vertical lines are welcome :-)
I still like Ben's solution in the comments of Hassou's answer very much; without the (not perfectly correct) lines it's much cleaner. For my use case I additionally removed the global indentation and added the option to also list hidden files, like so:
ls -aR | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\// /g'
Output (shortened even more):
.
fd
net
dev_snmp6
nfsfs
rpc
auth.unix.ip
stat
vlan
ns
I'm prettifying the output of @Hassou's answer with:
ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//──/g' -e 's/─/├/' -e '$s/├/└/'
This is much like the output of tree now:
.
├─pkcs11
├─pki
├───ca-trust
├─────extracted
├───────java
├───────openssl
├───────pem
├─────source
├───────anchors
├─profile.d
└─ssh
You can also make an alias of it:
alias ltree=$'ls -R | grep ":$" | sed -e \'s/:$//\' -e \'s/[^-][^\/]*\//──/g\' -e \'s/─/├/\' -e \'$s/├/└/\''
BTW, tree is not available in some environments, like MinGW, so this alternative is helpful.
The best answer is, of course, tree. But, to improve on other answers that rely on grepping the output of ls -R, here is a shell script that uses awk to print a tree of subdirectories. First, an example of output:
.
└── matching
    ├── bib
    ├── data
    │   └── source
    │       └── html
    ├── data
    │   └── plots
    ├── method
    │   ├── info
    │   └── soft
    │       ├── imgs
    │       │   ├── ascii
    │       │   └── symbol
    │       └── js
    └── ms
Then, the code:
ls -qLR 2>/dev/null \
| grep '^./' \
| sed -e 's,:$,,' \
| awk '
function tip(new) { stem = substr(stem, 1, length(stem) - 4) new }
{
    path[NR] = $0
}
END {
    elbow = "└── "; pipe = "│   "; tee = "├── "; blank = "    "
    none = ""
    #
    # Model each stem on the previous one, going bottom up.
    for (row = NR; row > 0; row--) {
        #
        # gsub: count (and clean) all slash-ending components; hence,
        # reduce path to its last component.
        growth = gsub(/[^/]+\//, "", path[row]) - slashes
        if (growth == 0) {
            tip(tee)
        }
        else if (growth > 0) {
            if (stem) tip(pipe) # if...: stem is empty at first!
            for (d = 1; d < growth; d++) stem = stem blank
            stem = stem elbow
        }
        else {
            tip(none)
            below = substr(stem, length(stem) - 4, 4)
            if (below == blank) tip(elbow); else tip(tee)
        }
        path[row] = stem path[row]
        slashes += growth
    }
    root = "."; print root
    for (row = 1; row <= NR; row++) print path[row]
}
'
The code gives better-looking results than other solutions because in a tree of subdirectories, the decorations in any branch depend on the branches below it. Hence, we need to process the output of ls -R in reverse order, from the last line to the first.
Adding the function below to your .bashrc lets you run the command without any arguments to display the current directory structure, or with a path as argument to display the directory structure of that path. This avoids the need to switch to a particular directory before running the command.
function tree() {
    find ${1:-.} | sed -e "s/[^-][^\/]*\// |/g" -e "s/|\([^ ]\)/|-\1/"
}
This works in Git Bash too.
Source: comment from @javasheriff here
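Usage sketch (the path argument is only an example):
tree            # tree of the current directory
tree /var/log   # tree of /var/log without cd'ing into it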
You can also use a combination of the find and awk commands to print the directory tree. For details, please refer to "How to print a multilevel tree directory structure using the Linux find and awk combined commands".
find . -type d | awk -F'/' '{
    depth=3;
    offset=2;
    str="| ";
    path="";
    if(NF >= 2 && NF < depth + offset) {
        while(offset < NF) {
            path = path "| ";
            offset ++;
        }
        print path "|-- "$NF;
    }
}'
Combining and extending existing answers into a t shell function:
t() {
    find -E "${1:-.}" -maxdepth "${2:-3}" \
        -not -regex ".*\/((.idea|.git|.venv|node_modules|venv)\/.*|.DS_Store)" \
        | sort | sed \
        -e "s/[^-][^\/]*\// ├ /g" \
        -e "s/├ \//├ /g" \
        -e "s/├ ├/│ ├/g" \
        -e "s/├ ├/│ ├/g" \
        -e "s/├ │/│ │/g" \
        -e '$s/├/└/'
}
Works on Mac:
$ t
.
├ src
│ ├ .idea
│ ├ plugins
│ │ ├ .flake8
│ │ ├ .git
│ │ ├ .github
│ │ ├ .gitignore
│ │ ├ .pre-commit-config.yaml
│ │ ├ .python-version
│ │ ├ Makefile
│ │ ├ README.md
│ │ ├ buildspecs
│ │ ├ cicd
│ │ ├ cicd.py
│ │ ├ docker
│ │ ├ packages
│ │ ├ plugin_template
│ │ ├ plugins
│ │ ├ scripts
│ │ └ venv
$ t . 2
.
├ src
│ ├ .idea
│ └ plugins
$ t src/plugins/ | more
│ ├
│ ├ .flake8
│ ├ .git
│ ├ .github
│ │ ├ pull_request_template.md
│ ├ .gitignore
│ ├ .pre-commit-config.yaml
│ ├ .python-version
│ ├ Makefile
│ ├ README.md
│ ├ buildspecs
│ │ ├ test-and-deploy.yml
│ ├ cicd
:
| more can be put at the end of the function for convenience.