Cycle through search results in alacritty terminal - alacritty

Currently I can search the console buffer in alacritty using Ctrl+Shift+F, which highlights all matching results. However, it's not clear to me how to jump back and forth between the results. In Vim, for example, you can do this with n for the next match and N for the previous one. Is there something similar in alacritty?

To scroll to the next match, you can use Enter.
To scroll to the previous match, you can use Shift+Enter.
The relevant search-mode actions from the default configuration are listed below:
# Search Mode
#- { key: Return, mode: Search|Vi, action: SearchConfirm }
#- { key: Escape, mode: Search, action: SearchCancel }
#- { key: C, mods: Control, mode: Search, action: SearchCancel }
#- { key: U, mods: Control, mode: Search, action: SearchClear }
#- { key: W, mods: Control, mode: Search, action: SearchDeleteWord }
#- { key: P, mods: Control, mode: Search, action: SearchHistoryPrevious }
#- { key: N, mods: Control, mode: Search, action: SearchHistoryNext }
#- { key: Up, mode: Search, action: SearchHistoryPrevious }
#- { key: Down, mode: Search, action: SearchHistoryNext }
#- { key: Return, mode: Search|~Vi, action: SearchFocusNext }
#- { key: Return, mods: Shift, mode: Search|~Vi, action: SearchFocusPrevious }
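If you want Vim-style n/N cycling, Alacritty's Vi mode provides it: toggle Vi mode, confirm a search, then n and N jump between matches. As a sketch, the corresponding default bindings look roughly like this (check the defaults shipped with your Alacritty version):
# Vi Mode
#- { key: Space, mods: Shift|Control, mode: ~Search, action: ToggleViMode }
#- { key: N, mode: Vi|~Search, action: SearchNext }
#- { key: N, mods: Shift, mode: Vi|~Search, action: SearchPrevious }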

Related

Two Login in Symfony 6

I'm having problems with two logins in Symfony 6. When I access /admin/login I get the error "Class App\Controller\AuthenticationUtils does not exist".
Here is my security.yaml:
security:
    encoders:
        App\Entity\User:
            algorithm: auto
        App\Entity\AdminUser:
            algorithm: auto

    enable_authenticator_manager: true

    # https://symfony.com/doc/current/security.html#registering-the-user-hashing-passwords
    password_hashers:
        Symfony\Component\Security\Core\User\PasswordAuthenticatedUserInterface: 'auto'
        App\Entity\User:
            algorithm: auto
        App\Entity\AdminUser:
            algorithm: auto

    # https://symfony.com/doc/current/security.html#loading-the-user-the-user-provider
    providers:
        # used to reload user from session & other features (e.g. switch_user)
        app_user_provider:
            entity:
                class: App\Entity\User
                property: email
        admin_user_provider:
            entity:
                class: App\Entity\AdminUser
                property: email

    firewalls:
        dev:
            pattern: ^/(_(profiler|wdt)|css|images|js)/
            security: false
        admin:
            pattern: ^/admin
            lazy: true
            provider: admin_user_provider
            custom_authenticator: App\Security\AdminLoginFormAuthenticator
            logout:
                path: admin_logout
                # where to redirect after logout
                # TODO target: app_any_route
        main:
            lazy: true
            provider: app_user_provider
            custom_authenticator: App\Security\LoginFormAuthenticator
            logout:
                path: app_logout
                # where to redirect after logout
                # TODO target: app_any_route

            # activate different ways to authenticate
            # https://symfony.com/doc/current/security.html#the-firewall
            # https://symfony.com/doc/current/security/impersonating_user.html
            # switch_user: true

    # Easy way to control access for large sections of your site
    # Note: Only the *first* access control that matches will be used
    access_control:
        # TODO - { path: ^/admin, roles: ROLE_ADMIN }
        # TODO - { path: ^/profile, roles: ROLE_USER }
And here is my AdminSecurityController.php:
<?php

namespace App\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Annotation\Route;

class AdminSecurityController extends AbstractController
{
    /**
     * @Route("/admin/security", name="admin_security")
     */
    public function index(): Response
    {
        return $this->render('admin_security/index.html.twig', [
            'controller_name' => 'AdminSecurityController',
        ]);
    }

    /**
     * @Route("/admin/login", name="admin_login")
     */
    public function login(AuthenticationUtils $authenticationUtils): Response
    {
        // get the login error if there is one
        // $error = $authenticationUtils->getLastAuthenticationError();
        // last username entered by the user
        // $lastUsername = $authenticationUtils->getLastUsername();

        return $this->render('admin_login/index.html.twig'); //, ['last_username' => $lastUsername, 'error' => $error]
    }

    /**
     * @Route("/admin/logout", name="admin_logout")
     */
    public function logout()
    {
        throw new \Exception('This method can be blank - it will be intercepted by the logout key on your firewall');
    }
}
I'm attaching some images with information about the error.
I have no idea what the problem is; I hope you can help me!
Thank you!
You're missing the import of
use Symfony\Component\Security\Http\Authentication\AuthenticationUtils;
at the top of your controller.
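With that import in place, the use block at the top of AdminSecurityController.php would look like this (the rest of the class is unchanged):
<?php

namespace App\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Annotation\Route;
use Symfony\Component\Security\Http\Authentication\AuthenticationUtils;

class AdminSecurityController extends AbstractController
{
    // ...
}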

Convert Azure Pipeline condition/variable to a json boolean

Currently, I'm working on a pipeline that should call an Azure Function in a certain way, depending on the outcome/result of a previous job in that pipeline.
The Azure Function should be called when the result of the previous job is either Succeeded, SucceededWithIssues, or Failed. We want to ignore Skipped and Cancelled.
The body sent to the Azure Function differs based on the result: Succeeded/SucceededWithIssues vs. Failed. It only differs by a single boolean in the payload called DeploymentFailed.
The current implementation is using two separate tasks for calling the Azure Function. This was necessary, since I couldn't find a way to convert the outcome of the previous job to a boolean.
The current pipeline as is:
trigger:
- master

parameters:
- name: jobA
  default: 'A'
- name: correlationId
  default: '90c7e477-2141-45db-812a-019a9f88bdc8'

pool:
  vmImage: ubuntu-latest

jobs:
- job: job_${{parameters.jobA}}
  steps:
  - script: echo "This job could potentially fail."

- job: job_B
  dependsOn: job_${{parameters.jobA}}
  variables:
    failed: $[dependencies.job_${{parameters.jobA}}.result]
  condition: in(variables['failed'], 'Succeeded', 'SucceededWithIssues', 'Failed')
  pool: server
  steps:
  - task: AzureFunction@1
    displayName: Call function succeeded
    condition: in(variables['failed'], 'Succeeded', 'SucceededWithIssues')
    inputs:
      function: "<azure-function-url>"
      key: "<azure-function-key>"
      method: 'POST'
      waitForCompletion: false
      headers: |
        {
          "Content-Type": "application/json"
        }
      body: |
        {
          "CorrelationId": "${{parameters.correlationId}}",
          "DeploymentFailed": false # I would like to use the outcome of `variable.failed` here and cast it to a JSON bool.
        }
  - task: AzureFunction@1
    displayName: Call function failed
    condition: in(variables['failed'], 'Failed')
    inputs:
      function: "<azure-function-url>"
      key: "<azure-function-key>"
      waitForCompletion: false
      method: 'POST'
      headers: |
        {
          "Content-Type": "application/json"
        }
      body: |
        {
          "CorrelationId": "${{parameters.correlationId}}",
          "DeploymentFailed": true # I would like to use the outcome of `variable.failed` here and cast it to a JSON bool.
        }
My question: how can I use the outcome of the previous job so that only one Azure Function task is needed?
You can map the expression directly to a variable:
variables:
  failed: $[dependencies.job_${{parameters.jobA}}.result]
  result: $[lower(notIn(dependencies.job_${{parameters.jobA}}.result, 'Succeeded', 'SucceededWithIssues'))]
and then:
- task: AzureFunction@1
  displayName: Call function
  condition: in(variables['failed'], 'Succeeded', 'SucceededWithIssues', 'Failed')
  inputs:
    function: "<azure-function-url>"
    key: "<azure-function-key>"
    method: 'POST'
    waitForCompletion: false
    headers: |
      {
        "Content-Type": "application/json"
      }
    body: |
      {
        "CorrelationId": "${{parameters.correlationId}}",
        "DeploymentFailed": $(result)
      }
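If the notIn() expression feels indirect, an equivalent mapping (untested sketch, same job and parameter names as above) uses eq() so the flag is true exactly when the previous job failed:
variables:
  failed: $[dependencies.job_${{parameters.jobA}}.result]
  # expands to 'true' only when the dependency's result is exactly 'Failed', otherwise 'false'
  result: $[lower(eq(dependencies.job_${{parameters.jobA}}.result, 'Failed'))]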

Combine outputs of mutually exclusive processes in a Nextflow (DSL2) pipeline

I have a DSL2 workflow in Nextflow set up like this:
nextflow.enable.dsl=2

// process 1, mutually exclusive with process 2 below
process bcl {
    tag "bcl2fastq"
    publishDir params.outdir, mode: 'copy', pattern: 'fastq/**fastq.gz'
    publishDir params.outdir, mode: 'copy', pattern: 'fastq/Stats/*'
    publishDir params.outdir, mode: 'copy', pattern: 'InterOp/*'
    publishDir params.outdir, mode: 'copy', pattern: 'Run*.xml'
    beforeScript 'export PATH=/opt/tools/bcl2fastq/bin:$PATH'

    input:
    path runfolder
    path samplesheet

    output:
    path 'fastq/Stats/', emit: bcl_ch
    path 'fastq/**fastq.gz', emit: fastqc_ch
    path 'InterOp/*', emit: interop_ch
    path 'Run*.xml'

    script:
    // processing omitted
}

// Process 2, note the slightly different outputs
process bcl_convert {
    tag "bcl-convert"
    publishDir params.outdir, mode: 'copy', pattern: 'fastq/**fastq.gz'
    publishDir params.outdir, mode: 'copy', pattern: 'fastq/Reports/*'
    publishDir params.outdir, mode: 'copy', pattern: 'InterOp/*'
    publishDir params.outdir, mode: 'copy', pattern: 'Run*.xml'
    beforeScript 'export PATH=/opt/tools/bcl-convert/:$PATH'

    input:
    path runfolder
    path samplesheet

    output:
    path 'fastq/Reports/', emit: bcl_ch
    path 'fastq/**fastq.gz', emit: fastqc_ch
    path 'InterOp/', emit: interop_ch
    path 'Run*.xml'

    script:
    // processing omitted
}

// downstream process that needs either the first or the second to work, agnostic
process fastqc {
    cpus 12
    publishDir "${params.outdir}/", mode: "copy"
    module 'conda//anaconda3'
    conda '/opt/anaconda3/envs/tools/'

    input:
    path fastq_input

    output:
    path "fastqc", emit: fastqc_output

    script:
    """
    mkdir -p fastqc
    fastqc -t ${task.cpus} $fastq_input -o fastqc
    """
}
Now I have a parameter, params.bcl_convert, which can be used to switch from one process to the other, and I set up the workflow like this:
workflow {
    runfolder_repaired = "${params.runfolder}".replaceFirst(/$/, "/")
    runfolder = Channel.fromPath(runfolder_repaired, type: 'dir')
    sample_data = Channel.fromPath(params.samplesheet, type: 'file')

    if (!params.bcl_convert) {
        bcl(runfolder, sample_data)
    } else {
        bcl_convert(runfolder, sample_data)
    }
    fastqc(bcl.out.mix(bcl_convert.out)) // Problematic line
}
The problem lies in the problematic line: I'm not sure how (and whether it is possible) to have fastqc take the output of either bcl or bcl_convert (but only fastqc_ch, not the rest), regardless of which process generated it.
Some of the things I've tried include (inspired by https://github.com/nextflow-io/nextflow/issues/1646, although that one uses the output of a process):
if (!params.bcl_convert) {
    def bcl_out = bcl(runfolder, sample_data).out
} else {
    def bcl_out = bcl_convert(runfolder, sample_data).out
}
fastqc(bcl_out.fastq_ch)
But then compilation fails with Variable "runfolder" already defined in the process scope, even when using an approach similar to the one in that post:
def result_bcl2fastq = !params.bclconvert ? bcl(runfolder, sample_data): Channel.empty()
def result_bclconvert = params.bclconvert ? bcl_convert(runfolder, sample_data): Channel.empty()
I thought about using conditionals in a single script; however, the outputs from the two processes differ, so that's not really possible.
The only way I got it to work was by duplicating the downstream call in each branch, like:
if (!params.bcl_convert) {
    bcl(runfolder, sample_data)
    fastqc(bcl.out.fastqc_ch)
} else {
    bcl_convert(runfolder, sample_data)
    fastqc(bcl_convert.out.fastqc_ch)
}
However, this looks to me like an unnecessary complication. Is what I want to do actually possible?
I was able to figure this out with a lot of trial and error.
Assigning the result of a process invocation to a variable gives you an object that behaves like that process's .out property. So I assigned the same variable in both exclusive branches, kept the output names identical (as seen in the question), and then accessed the channel directly without using .out:
workflow {
    runfolder_repaired = "${params.runfolder}".replaceFirst(/$/, "/")
    runfolder = Channel.fromPath(runfolder_repaired, type: 'dir')
    sample_data = Channel.fromPath(params.samplesheet, type: 'file')

    if (!params.bcl_convert) {
        bcl_out = bcl(runfolder, sample_data)
    } else {
        bcl_out = bcl_convert(runfolder, sample_data)
    }
    fastqc(bcl_out.fastqc_ch)
}
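For what it's worth, the same idea can be written a bit more compactly with a ternary, since only the selected branch is evaluated (untested sketch based on the code above):
workflow {
    runfolder_repaired = "${params.runfolder}".replaceFirst(/$/, "/")
    runfolder = Channel.fromPath(runfolder_repaired, type: 'dir')
    sample_data = Channel.fromPath(params.samplesheet, type: 'file')

    // only the selected process is invoked; both emit a channel named fastqc_ch
    bcl_out = params.bcl_convert ? bcl_convert(runfolder, sample_data) : bcl(runfolder, sample_data)

    fastqc(bcl_out.fastqc_ch)
}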

Can't restrict API access by positional args via external_auth SaltStack

I'm trying to restrict calls to state.apply to specific SLS files only, via the pam external auth module.
external_auth:
  pam:
    myuser:
      - '@runner':
          - jobs.list_job
      - '*':
          - test.ping
          - 'state.apply':
            args:
              - 'path/to/sls'
When I call the API via CherryPy, I get a 401:
curl http://sat_master/run -H 'content-type: application/json' \
  -d '[{"tgt":"target","arg":["path/to/sls"],"kwarg":{"pillar":{"foo1":"bar1","foo2":"bar2"}},"client":"local_async","fun":"state.apply","username":"myuser","password":"<password>","eauth":"pam"}]'
What I also tried:
external_auth:
  pam:
    myuser:
      - '@runner':
          - jobs.list_job
      - '*':
          - test.ping
          - 'state.apply':
            args:
              - '.*'

external_auth:
  pam:
    myuser:
      - '@runner':
          - jobs.list_job
      - '*':
          - test.ping
          - 'state.apply':
            args:
              - '.*'
            kwargs:
              '.*': '.*'
If I don't specify args, it works:
external_auth:
  pam:
    myuser:
      - '@runner':
          - jobs.list_job
      - '*':
          - test.ping
          - state.apply
How do I do this correctly?
The args field must be a key of the function's own mapping (i.e. nested under state.apply), not a sibling of the function name. For example:
Wrong:
'*':
  - state.apply:
    args:
      - 'path/to/sls'
The JSON equivalent:
{
  "*": [
    {
      "state.apply": null,
      "args": [
        "path/to/sls"
      ]
    }
  ]
}
Right:
'*':
  - state.apply:
      args:
        - 'path/to/sls'
The JSON equivalent:
{
  "*": [
    {
      "state.apply": {
        "args": [
          "path/to/sls"
        ]
      }
    }
  ]
}
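Applied to the configuration from the question, that nesting would look roughly like this (a sketch; double-check against your Salt version's external_auth documentation):
external_auth:
  pam:
    myuser:
      - '@runner':
          - jobs.list_job
      - '*':
          - test.ping
          - 'state.apply':
              args:
                - 'path/to/sls'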

In Cloudformation YAML, use a Ref in a multiline string (? use Fn:Sub)

Imagine you have an AWS resource such as:
Resources:
  IdentityPool:
    Type: "AWS::Cognito::IdentityPool"
    Properties:
      IdentityPoolName: ${self:custom.appName}_${self:provider.stage}_identity
      CognitoIdentityProviders:
        - ClientId:
            Ref: UserPoolClient
The Ref for "AWS::Cognito::IdentityPool" returns the id of this resource. Now let's say I want to reference that id in a multiline string. I've tried:
Outputs:
  AmplifyConfig:
    Description: key/values to be passed to Amplify.configure(config);
    Value: |
      {
        'aws_cognito_identity_pool_id': ${Ref: IdentityPool}, ## <------ Error
        'aws_sign_in_enabled': 'enable',
        'aws_user_pools_mfa_type': 'OFF',
      }
I've also tried to use Fn::Sub, but without luck.
AmplifyConfig:
  Description: key/values to be passed to Amplify.configure(config);
  Value:
    Fn::Sub
      - |
        {
          'aws_cognito_identity_pool_id': '${Var1Name}',
          'aws_sign_in_enabled': 'enable',
        }
      - Var1Name:
          Ref: IdentityPool
Any way to do this?
Using a pipe symbol | in YAML turns all of the following indented lines into a multi-line string.
A pipe, combined with !Sub, will let you use:
- your resource's Ref return value, simply as ${YourResource}
- its Fn::GetAtt return values, with just a period: ${YourResource.TheAttribute}
- any pseudo parameter as-is, like ${AWS::Region}
It's as easy as writing !Sub |, jumping to the next line, and adding proper indentation. Example:
Resources:
  YourUserPool:
    Type: AWS::Cognito::UserPool
    Properties:
      UserPoolName: blabla

Outputs:
  AmplifyConfig:
    Description: key/values to be passed to Amplify.configure(config);
    Value: !Sub |
      {
        'aws_cognito_identity_pool_id': '${YourUserPool}',
        'aws_sign_in_enabled': 'enable',
        'aws_user_pools_mfa_type': 'OFF',
      }
  AdvancedUsage:
    Description: use Pseudo Parameters and/or resources attributes
    Value: !Sub |
      {
        'aws_region': '${AWS::Region}',
        'user_pool_arn': '${YourUserPool.Arn}',
      }
I found out how to do this using Fn::Join:
AmplifyConfig:
  Description: key/values to be passed to Amplify.configure(config);
  Value:
    Fn::Join:
      - ''
      - - "{"
        - "\n 'aws_cognito_identity_pool_id':"
        - Ref: IdentityPool
        - "\n 'aws_user_pools_id':"
        - Ref: UserPool
        - "\n 'aws_user_pools_web_client_id':"
        - Ref: UserPoolClient
        - ",\n 'aws_cognito_region': '${self:provider.region}'"
        - ",\n 'aws_sign_in_enabled': 'enable'"
        - ",\n 'aws_user_pools': 'enable'"
        - ",\n 'aws_user_pools_mfa_type': 'OFF'"
        - "\n}"
This works but it's kinda ugly. I'm going to leave this answer unaccepted for a while to see if anyone can show how to do this with Fn::Sub.
Using YAML you could compose this simply:
Outputs:
  AmplifyConfig:
    Description: key/values to be passed to Amplify.configure(config);
    Value: !Sub '
      {
        "aws_cognito_identity_pool_id": "${IdentityPool}",
        "aws_sign_in_enabled": "enable",
        "aws_user_pools_mfa_type": "OFF",
      }'
Leaving this here, as I encountered a Base64 encoding error when doing something similar and this question came up when searching for a solution.
In my case I was using a multi-line string + !Sub to populate UserData and receiving the following error in AWS CloudFormation.
Error:
Invalid BASE64 encoding of user data. (Service: AmazonEC2; Status Code: 400; Error Code: InvalidUserData.Malformed; Request ID: *; Proxy: null)
Solution:
This can be solved by combining two built-in CloudFormation functions, Fn::Base64 and !Sub:
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash
    echo ${SomeVar}
