I have configured Filebeat to send different (VoIP/SMS) CSV files to Logstash. However, only the VoIP .csv files get shipped to Logstash.
The CSV files are under different folders:
logs/sms
logs/voip
I had another issue, described in this stack post. I managed to partially sort that out by creating tags in filebeat for these .csvs.
pwd
/usr/share/filebeat/logs
ls -ltr
drwxr-xr-x 2 root root 106496 Dec 4 03:39 sms
drwxr-xr-x 2 root root 131072 Dec 8 01:49 voip
ls -ltr sms | head -4
-rw-r--r-- 1 root root 7933 Dec 4 03:39 sms_cdr_1010.csv
-rw-r--r-- 1 root root 7974 Dec 4 03:39 sms_cdr_101.csv
-rw-r--r-- 1 root root 7949 Dec 4 03:39 sms_cdr_1009.csv
ls -ltr voip | head -4
-rw-r--r-- 1 root root 11616 Dec 4 03:39 voip_cdr_10.csv
-rw-r--r-- 1 root root 11533 Dec 4 03:39 voip_cdr_1.csv
-rw-r--r-- 1 root root 11368 Dec 4 03:39 voip_cdr_0.csv
Filebeat only starts harvesting the voip .csv files:
2019-12-08T02:37:18.872Z INFO crawler/crawler.go:72 Loading Inputs: 1
2019-12-08T02:37:18.872Z INFO log/input.go:138 Configured paths: [/usr/share/filebeat/logs/voip/*]
2019-12-08T02:37:18.872Z INFO input/input.go:114 Starting input of type: log; ID: 801046369164835837
2019-12-08T02:37:18.872Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2019-12-08T02:37:18.977Z INFO log/harvester.go:255 Harvester started for file: /usr/share/filebeat/logs/voip/voip_cdr_185.csv
2019-12-08T02:37:18.978Z INFO log/harvester.go:255 Harvester started for file: /usr/share/filebeat/logs/voip/voip_cdr_2809.csv
2019-12-08T02:37:18.979Z INFO log/harvester.go:255 Harvester started for file: /usr/share/filebeat/logs/voip/voip_cdr_2847.csv
filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - logs/sms/*
  tags: ["sms"]
  paths:
    - logs/voip/*
  tags: ["voip"]

output.logstash:
  enabled: true
  hosts: ["logstash:5044"]

logging.to_files: true
logging.files:
logstash.conf
input {
  beats {
    port => "5044"
  }
}

filter {
  if "sms" in [tags] {
    csv {
      columns => ['Date', 'Time', 'PLAN', 'CALL_TYPE', 'MSIDN', 'IMSI', 'IMEI']
      separator => ","
      skip_empty_columns => true
      quote_char => "'"
    }
  }
  if "voip" in [tags] {
    csv {
      columns => ['Record_Nb', 'Date', 'Time', 'PostDialDelay', 'Disconnect-Cause', 'Sip-Status','Session-Disposition', 'Calling-RTP-Packets-Lost','Called-RTP-Packets-Lost', 'Calling-RTP-Avg-Jitter','Called-RTP-Avg-Jitter', 'Calling-R-Factor', 'Called-R-Factor', 'Calling-MOS', 'Called-MOS', 'Ingress-SBC', 'Egress-SBC', 'Originating-Trunk-Group', 'Terminating-Trunk-Group']
      separator => ","
      skip_empty_columns => true
      quote_char => "'"
    }
  }
}

output {
  if "sms" in [tags] {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      index => "smscdr_index"
    }
    stdout {
      codec => rubydebug
    }
  }
  if "voip" in [tags] {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      index => "voipcdr_index"
    }
    stdout {
      codec => rubydebug
    }
  }
}
Try the configuration below. In the original filebeat.yml the second paths key overrides the first one inside the same input (which is why Filebeat logs "Loading Inputs: 1"), so only the voip folder is harvested; define each folder as its own input with its own tag:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/share/filebeat/logs/sms/*.csv
  tags: ["sms"]
- type: log
  enabled: true
  paths:
    - /usr/share/filebeat/logs/voip/*.csv
  tags: ["voip"]

output.logstash:
  enabled: true
  hosts: ["logstash:5044"]

logging.to_files: true
logging.files:
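As a sanity check (not part of the original answer), Filebeat's built-in test subcommands can confirm that the new config parses and that the Logstash output is reachable before restarting; the config path below assumes the container layout shown in the question:

# verify the YAML parses and the inputs are accepted
filebeat test config -c /usr/share/filebeat/filebeat.yml
# verify the connection to logstash:5044
filebeat test output -c /usr/share/filebeat/filebeat.yml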
I monitor a remote Linux host with Logstash and SNMP. When I try to get interfaces or ifSpeed, everything is OK. But when I try to get sysDescr, CPU storage, and memory storage, I cannot get any data back.
I don't know why. The Logstash log seems normal, too.
The logstash.conf:
input {
  snmp {
    tables => [
      {
        "name" => "sysDescr"
        "columns" => ["1.3.6.1.2.1.1.1.0"]
      }
    ]
    hosts => [{
      host => "udp:192.168.131.125/161"
      community => "laundry"
      version => "2c"
    }]
    interval => 5
    type => "snmp"
  }
  beats {
    port => 5044
    add_field => {"type" => "beat"}
  }
  tcp {
    port => 50000
  }
}

## Add your filters / logstash plugins configuration here

output {
  if [type] == "beat" {
    elasticsearch {
      hosts => ["${ELASTICSEARCH_HOST}:9200"]
      index => "beat-logs"
    }
  }
  if [type] == "snmp" {
    elasticsearch {
      hosts => ["${ELASTICSEARCH_HOST}:9200"]
      index => "snmp-logs"
    }
  }
}
The Logstash log is:
root@laundry:/opt/ground/management# docker logs -f -t -n=5 5ae67e146ab0
2023-02-03T02:35:04.639861138Z [2023-02-03T10:35:04,639][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
2023-02-03T02:35:04.873655686Z [2023-02-03T10:35:04,873][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
2023-02-03T02:35:04.885933029Z [2023-02-03T10:35:04,884][INFO ][logstash.inputs.tcp ][main][06f1d7ee5445cc0e11cda56012ef6767600f21acd6133e02e957f761d26bac84] Starting tcp input listener {:address=>"0.0.0.0:50000", :ssl_enable=>false}
2023-02-03T02:35:04.934224084Z [2023-02-03T10:35:04,933][INFO ][org.logstash.beats.Server][main][4b91981ecb09a5d2
The output of snmpwalk and snmpget:
root@laundry:/opt/ground/management# snmpwalk -v 2c -c laundry 192.168.131.125 1.3.6.1.2.1.1.1.0
iso.3.6.1.2.1.1.1.0 = STRING: "Linux laundry 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 12:06:43 UTC 2023 aarch64"
root@laundry:/opt/ground/management# snmpget -v 2c -c laundry 192.168.131.125 1.3.6.1.2.1.1.1.0
iso.3.6.1.2.1.1.1.0 = STRING: "Linux laundry 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 12:06:43 UTC 2023 aarch64"
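One thing worth noting (my assumption, not stated in the post): 1.3.6.1.2.1.1.1.0 (sysDescr) is a scalar OID, while the snmp input's tables option expects table/column OIDs. The plugin also documents a get option for scalar OIDs, so a minimal sketch of that variant, reusing the host settings above, would be:

input {
  snmp {
    # scalar OIDs such as sysDescr are requested with `get` rather than `tables`
    get => ["1.3.6.1.2.1.1.1.0"]
    hosts => [{
      host => "udp:192.168.131.125/161"
      community => "laundry"
      version => "2c"
    }]
    interval => 5
    type => "snmp"
  }
}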
I have a log file in the format below that I want to ingest into Elasticsearch, but the data filtered by Logstash is not being pushed into Elasticsearch.
With the same grok filter configuration I am able to get a match from Kibana Dev Tools.
Sample logfile:
OCDE - 2019-05-22 13:24:34.000 ERROR org.ramyam.ocde.task.NBALookupTask.checkResponsesToBeProcessed - checkResponsesToBeProcessed started : Wed May 22 13:24:34 IST 2019
Filebeat configuration:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\data\logs\OCDE.log
  document_type: ocde
logstash configuration:
input {
  file {
    type => "ocde"
    path => "C:\data\logs\OCDE.log"
  }
  beats {
    port => 5044
    ssl => false
  }
}

filter {
  grok {
    match => [ "message", '%{DATA:moduleName} - %{TIMESTAMP_ISO8601:loggerTime}\s+%{LOGLEVEL:level}\s+%{JAVACLASS:className}\.%{DATA:methodName} - %{GREEDYDATA:loggermsg}}' ]
  }
}

output {
  if [type] == "ocde" {
    elasticsearch {
      hosts => ["localhost:9200"]
      #manage_template => false
      index => "enliven_be_log_yyyymmdd"
      document_type => ocde
    }
  }
}
I am expecting the result below in Elasticsearch from the above configuration:
{
"level": "ERROR",
"loggerTime": "2019-05-22 13:24:34.000",
"moduleName": "OCDE",
"methodName": "checkResponsesToBeProcessed",
"className": "org.ramyam.ocde.task.NBALookupTask",
"loggermsg": "checkResponsesToBeProcessed started : Wed May 22 13:24:34 IST 2019"
}
Can anyone please explain or share a sample configuration showing what I am missing?
You can try the grok pattern below:
%{DATA:moduleName}%{SPACE}*-%{SPACE}*%{TIMESTAMP_ISO8601:loggerTime}%{SPACE}*%{LOGLEVEL:level}%{SPACE}*%{JAVACLASS:className}\.%{DATA:methodName}%{SPACE}*-%{SPACE}*%{GREEDYDATA:loggermsg}
Change your grok from:
%{DATA:moduleName} - %{TIMESTAMP_ISO8601:loggerTime}\s+%{LOGLEVEL:level}\s+%{JAVACLASS:className}\.%{DATA:methodName} - %{GREEDYDATA:loggermsg}}
to:
%{DATA:moduleName} - %{TIMESTAMP_ISO8601:loggerTime}\s+%{LOGLEVEL:level}\s+%{JAVACLASS:className}\.%{DATA:methodName} - %{GREEDYDATA:loggermsg}
To validate this, use http://grokdebug.herokuapp.com/ and paste in the log message you provided.
Your pattern works fine, you just had one extra bracket at the end.
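For completeness, a sketch of the corrected filter block; the date filter is my addition (assuming the timestamp format shown in the sample line), not part of either answer, and simply maps loggerTime onto @timestamp:

filter {
  grok {
    match => [ "message", '%{DATA:moduleName} - %{TIMESTAMP_ISO8601:loggerTime}\s+%{LOGLEVEL:level}\s+%{JAVACLASS:className}\.%{DATA:methodName} - %{GREEDYDATA:loggermsg}' ]
  }
  # optional: use the parsed logger time as the event @timestamp
  date {
    match => ["loggerTime", "yyyy-MM-dd HH:mm:ss.SSS"]
  }
}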
It is working for auth.log but not working for authcopy.log. There is no error message. There is no output.
This is working.
sudo /usr/share/logstash/bin/logstash -e 'input { file { path => "/var/log/auth.log" } }'
output:
{
"#version" => "1",
"host" => "removed",
"path" => "/var/log/auth.log",
"#timestamp" => 2018-01-10T23:51:39.912Z,
"message" => "Jan 10 20:17:55 removed sudo: pam_unix(sudo:session): session closed for user root"
}
...
This is not working.
sudo /usr/share/logstash/bin/logstash -e 'input { file { path => "/var/log/authcopy.log" } }'
There is no error message. There is no output.
Copied auth.log to authcopy.log
sudo cp /var/log/auth.log /var/log/authcopy.log
sudo chmod 777 /var/log/authcopy.log
ls -l /var/log/auth*.log
-rwxrwxrwx 1 root root 391617 Jan 10 19:30 /var/log/authcopy.log
-rw-r----- 1 syslog adm 395465 Jan 10 20:13 /var/log/auth.log
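A detail worth checking (my note, not from the original post): by default the file input starts tailing at the end of a file and remembers its position in a sincedb file, so a freshly copied file that never receives new lines produces no output. For a one-off test, forcing a read from the beginning and disabling the sincedb should show whether that is the cause:

sudo /usr/share/logstash/bin/logstash -e 'input { file { path => "/var/log/authcopy.log" start_position => "beginning" sincedb_path => "/dev/null" } }'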
Host: ubuntu latest x64
vagrant 1.7.2
Configuration (puphpet)
Yaml config:
vagrantfile:
target: local
vm:
box: puphpet/debian75-x64
box_url: puphpet/debian75-x64
hostname: developer
memory: '1024'
cpus: '1'
chosen_provider: virtualbox
network:
private_network: 192.168.56.2
forwarded_port:
vflnp_o959ky5yk541:
host: '8592'
guest: '22'
post_up_message: ''
provider:
virtualbox:
modifyvm:
natdnshostresolver1: on
showgui: '0'
vmware:
numvcpus: 1
parallels:
cpus: 1
provision:
puppet:
manifests_path: puphpet/puppet
manifest_file: site.pp
module_path: puphpet/puppet/modules
options:
- '--verbose'
- '--hiera_config /vagrant/puphpet/puppet/hiera.yaml'
- '--parser future'
synced_folder:
vflsf_xjd88rswg95m:
source: ../Presta/www
target: /var/www/presta
sync_type: nfs
smb:
smb_host: ''
smb_username: ''
smb_password: ''
rsync:
args:
- '--verbose'
- '--archive'
- '-z'
exclude:
- .vagrant/
- .git/
auto: 'true'
# owner: www-data
# group: www-data
vflsf_htm6wvj2khq1:
source: '../Karty_Pracy/www'
target: /var/www/worksheets
sync_type: nfs
smb:
smb_host: ''
smb_username: ''
smb_password: ''
rsync:
args:
- '--verbose'
- '--archive'
- '-z'
exclude:
- .vagrant/
- .git/
auto: 'true'
# owner: www-user
# group: www-data
vfawf_htsadas3khq1:
source: '../Magazyn/www'
target: /var/www/warehouse
sync_type: nfs
smb:
smb_host: ''
smb_username: ''
smb_password: ''
rsync:
args:
- '--verbose'
- '--archive'
- '-z'
exclude:
- .vagrant/
- .git/
auto: 'true'
# owner: www-user
# group: www-data
# map_uid: 0
# map_gid: 0
usable_port_range:
start: 10200
stop: 10500
ssh:
host: null
port: null
private_key_path: null
username: vagrant
guest_port: null
keep_alive: true
forward_agent: false
forward_x11: false
shell: 'bash -l'
vagrant:
host: detect
server:
install: '1'
packages:
- htop
- zsh
- git
- mc
- unzip
- zip
- unrar
users_groups:
install: '1'
groups: { }
users: { }
firewall:
install: '1'
rules: { }
cron:
install: '1'
jobs: { }
nginx:
install: '0'
settings:
default_vhost: 1
proxy_buffer_size: 128k
proxy_buffers: '4 256k'
upstreams: { }
vhosts:
nxv_nwz3nbjvoere:
server_name: awesome.dev
server_aliases:
- www.awesome.dev
www_root: /var/www/awesome
listen_port: '80'
index_files:
- index.html
- index.htm
- index.php
client_max_body_size: 1m
ssl: '0'
ssl_cert: ''
ssl_key: ''
ssl_port: '443'
ssl_protocols: ''
ssl_ciphers: ''
rewrite_to_https: '1'
spdy: '1'
locations:
nxvl_nfl6ndp0s1h1:
location: /
autoindex: off
try_files:
- $uri
- $uri/
- /index.php$is_args$args
fastcgi: ''
fastcgi_index: ''
fastcgi_split_path: ''
nxvl_oy9dxc91j6zf:
location: '~ \.php$'
autoindex: off
try_files:
- $uri
- $uri/
- /index.php$is_args$args
fastcgi: '127.0.0.1:9000'
fastcgi_index: index.php
fastcgi_split_path: '^(.+\.php)(/.*)$'
fast_cgi_params_extra:
- 'SCRIPT_FILENAME $request_filename'
- 'APP_ENV dev'
proxies: { }
apache:
install: '1'
settings:
user: www-data
group: www-data
default_vhost: true
manage_user: false
manage_group: false
sendfile: 0
modules:
- proxy_fcgi
- rewrite
vhosts:
presta:
servername: presta.dev
serveraliases:
- www.presta.dev
docroot: /var/www/presta
port: '80'
setenv:
- 'APP_ENV dev'
custom_fragment: ''
ssl: '0'
ssl_cert: ''
ssl_key: ''
ssl_chain: ''
ssl_certs_dir: ''
ssl_protocol: ''
ssl_cipher: ''
directories:
avd_hycw94gg20u6:
path: /var/www/presta
options:
- Indexes
- FollowSymlinks
- MultiViews
allow_override:
- All
require:
- 'all granted'
custom_fragment: ''
files_match:
avdfm_0yv1i2x4gd9j:
path: \.php$
sethandler: 'proxy:fcgi://127.0.0.1:9000'
custom_fragment: ''
provider: filesmatch
provider: directory
worksheets:
servername: worksheets.dev
serveraliases:
- www.worksheets.dev
docroot: /var/www/worksheets/web
port: '80'
setenv:
- APP_ENV_dev
custom_fragment: ''
ssl: '0'
ssl_cert: ''
ssl_key: ''
ssl_chain: ''
ssl_certs_dir: ''
ssl_protocol: ''
ssl_cipher: ''
directories:
avd_88fsnn8di1c5:
path: /var/www/worksheets/web
options:
- Indexes
- FollowSymlinks
- MultiViews
allow_override:
- All
require:
- 'all granted'
custom_fragment: ''
files_match:
avdfm_ro0fe2wzpcad:
path: \.php$
sethandler: 'proxy:fcgi://127.0.0.1:9000'
custom_fragment: ''
provider: filesmatch
provider: directory
warehouse:
servername: warehouse.dev
serveraliases:
- www.warehouse.dev
docroot: /var/www/warehouse/web
port: '80'
setenv:
- APP_ENV_dev
custom_fragment: ''
ssl: '0'
ssl_cert: ''
ssl_key: ''
ssl_chain: ''
ssl_certs_dir: ''
ssl_protocol: ''
ssl_cipher: ''
directories:
avd_88fsnn8diaw5:
path: /var/www/warehouse/web
options:
- Indexes
- FollowSymlinks
- MultiViews
allow_override:
- All
require:
- 'all granted'
custom_fragment: ''
files_match:
avdfm_ro0fe2wzpcad:
path: \.php$
sethandler: 'proxy:fcgi://127.0.0.1:9000'
custom_fragment: ''
provider: filesmatch
provider: directory
php:
install: '1'
settings:
version: '56'
modules:
php:
- cli
- intl
- mcrypt
- curl
- cgi
- gd
- imagick
- mysql
- mysqlnd
- sqlite
pear: { }
pecl:
- pecl_http
ini:
display_errors: On
error_reporting: '-1'
session.save_path: /var/lib/php/session
date.timezone: UTC
fpm_ini:
error_log: /var/log/php-fpm.log
fpm_pools:
phpfp_tl7whm0zxnuj:
ini:
prefix: www
listen: '127.0.0.1:9000'
security.limit_extensions: .php
user: www-user
group: www-data
composer: '1'
composer_home: ''
xdebug:
install: '1'
settings:
xdebug.default_enable: '1'
xdebug.remote_autostart: '1'
xdebug.idekey: PHPStorm
xdebug.remote_connect_back: '1'
xdebug.remote_enable: '1'
xdebug.remote_handler: dbgp
xdebug.remote_port: '9000'
blackfire:
install: '0'
settings:
server_id: ''
server_token: ''
agent:
http_proxy: ''
https_proxy: ''
log_file: stderr
log_level: '1'
php:
agent_timeout: '0.25'
log_file: ''
log_level: '1'
xhprof:
install: '0'
wpcli:
install: '0'
version: v0.19.0
drush:
install: '0'
version: 6.3.0
ruby:
install: '1'
versions: { }
python:
install: '1'
packages: { }
versions: { }
nodejs:
install: '1'
npm_packages:
- bower
hhvm:
install: '0'
nightly: 0
composer: '1'
composer_home: ''
settings: { }
server_ini:
hhvm.server.host: 127.0.0.1
hhvm.server.port: '9000'
hhvm.log.use_log_file: '1'
hhvm.log.file: /var/log/hhvm/error.log
php_ini:
display_errors: On
error_reporting: '-1'
date.timezone: UTC
mysql:
install: '1'
settings:
version: '5.6'
root_password: presta
override_options: { }
adminer: '1'
users:
mysqlnu_9vxflivbus3n:
name: presta
password: presta
mysqlnu_uvdxcu9kcnmf:
name: worksheets
password: worksheets
mysqlnu_9vxfladvbus3n:
name: warehouse
password: warehouse
databases:
mysqlnd_0eud6qyvgftl:
name: presta
sql: ''
mysqlnd_310bhtyb1ezk:
name: worksheets
sql: ''
mysqlnd_310bhaddsdasdw:
name: warehouse
sql: ''
grants:
mysqlng_er3ka00fh3xm:
user: presta
table: '*.*'
privileges:
- ALL
mysqlng_l0g86y9hymun:
user: worksheets
table: '*.*'
privileges:
- ALL
mysqlng_l0asdy9hymun:
user: warehouse
table: '*.*'
privileges:
- ALL
postgresql:
install: '0'
settings:
global:
encoding: UTF8
version: '9.3'
server:
postgres_password: '123'
databases: { }
users: { }
grants: { }
adminer: 0
mongodb:
install: '0'
settings:
auth: 1
bind_ip: 127.0.0.1
port: '27017'
databases: { }
redis:
install: '0'
settings:
conf_port: '6379'
sqlite:
install: '1'
adminer: 0
databases: { }
mailcatcher:
install: '1'
settings:
smtp_ip: 0.0.0.0
smtp_port: 1025
http_ip: 0.0.0.0
http_port: '1080'
mailcatcher_path: /usr/local/rvm/wrappers/default
from_email_method: headers
beanstalkd:
install: '0'
settings:
listenaddress: 0.0.0.0
listenport: '13000'
maxjobsize: '65535'
maxconnections: '1024'
binlogdir: /var/lib/beanstalkd/binlog
binlogfsync: null
binlogsize: '10485760'
beanstalk_console: 0
rabbitmq:
install: '0'
settings:
port: '5672'
users: { }
vhosts: { }
plugins: { }
elastic_search:
install: '0'
settings:
version: 1.4.1
java_install: true
solr:
install: '0'
settings:
version: 4.10.2
port: '8984'
Issue:
$ php app/console fos:user:change-password admin admin
[Symfony\Component\Debug\Exception\ContextErrorException] Warning:
chmod(): Operation not permitted
fos:user:change-password
The same issue appears on the web, too.
Permissions look good:
$ ls -la
total 224
drwxrwxrwx 12 www-data www-data 4096 Jul 2 12:57 ./
drwxrwxr-x 6 root www-data 4096 Jul 3 06:39 ../
drwxrwxrwx 6 www-data www-data 4096 Jul 2 13:26 app/
drwxrwxrwx 2 www-data www-data 4096 Jun 26 10:51 bin/
-rwxrwxrwx 1 www-data www-data 362 Jul 2 12:57 bower.json*
-rwxrwxrwx 1 www-data www-data 35 Jul 2 12:57 .bowerrc*
-rwxrwxrwx 1 www-data www-data 3340 Jul 2 12:57 composer.json*
-rwxrwxrwx 1 www-data www-data 154557 Jul 2 09:50 composer.lock*
drwxrwxrwx 2 www-data www-data 4096 Jul 2 12:57 doc/
drwxrwxrwx 3 www-data www-data 4096 Jul 2 10:25 files/
drwxrwxrwx 8 www-data www-data 4096 Jul 3 06:47 .git/
-rwxrwxrwx 1 www-data www-data 164 Jul 2 12:57 .gitignore*
drwxrwxrwx 3 www-data www-data 4096 Jul 3 06:26 .idea/
drwxrwxrwx 3 www-data www-data 4096 Jul 2 12:57 src/
-rwxrwxrwx 1 www-data www-data 595 Jul 2 12:57 sync.sh*
-rwxrwxrwx 1 www-data www-data 245 Jul 2 12:57 TODO*
drwxrwxrwx 3 www-data www-data 4096 Jul 2 10:54 var/
drwxrwxrwx 37 www-data www-data 4096 Jul 2 09:59 vendor/
drwxrwxrwx 10 www-data www-data 4096 Jul 2 13:01 web/
So this looks like a bad NFS mount (chmod can't be used). How can I solve this?
You must treat the virtual-machine side of a synced folder as read-only. Run commands that modify the contents of a synced folder on the host computer itself (your Mac) instead; for example, php composer.phar install. Remote OSes (virtual machines) will raise "Operation not permitted" errors when they try to change files originating on the host operating system.
For reference, see my answer to a similar question at this link:
The Operation not permitted problem...
There's another answer that discusses the importance of treating synced_folders as read-only file locations on your Virtual Machines.
How to use vagrant on multiple projects
You must run the fos:user:change-password command from the /vagrant folder (not from /var/www/presta).
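For illustration only (the exact location of app/console under the /vagrant share is assumed, since the post does not show it):

# on the guest VM, switch to the /vagrant share before running the console command
cd /vagrant
php app/console fos:user:change-password admin admin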
It might be related to this issue.
The solution is:
vagrant plugin uninstall vagrant-bindfs
vagrant reload
Hope it helps.
I'm playing around with Ghost, and I'd like to make the gruntfile compile the sass files from my theme.
So I started by modifying the sass task:
...
sass: {
  admin: {
    files: {
      '<%= paths.adminAssets %>/css/screen.css': '<%= paths.adminAssets %>/sass/screen.scss'
    }
  },
  themes: {
    files: {
      'content/themes/**/css/ie.css': 'content/themes/**/src/sass/ie.sass',
      'content/themes/**/css/print.css': 'content/themes/**/src/sass/print.sass',
      'content/themes/**/css/screen.css': 'content/themes/**/src/sass/screen.sass'
    }
  }
}
...
I realised that I could simplify this to:
...
sass: {
  admin: {
    files: {
      '<%= paths.adminAssets %>/css/screen.css': '<%= paths.adminAssets %>/sass/screen.scss'
    }
  },
  themes: {
    files: {
      'content/themes/**/css/*.css': 'content/themes/**/src/sass/*.sass'
    }
  }
}
...
But then I was thinking, why isn't it replacing the stars in the destination with what it matches from the source?
It turns out it was just creating the following:
$ ls -al ./content/themes/
total 0
drwxrwxr-x 1 zenobius zenobius 50 Nov 18 02:10 .
drwxrwxr-x 1 zenobius zenobius 46 Nov 15 11:02 ..
drwxrwxr-x 1 zenobius zenobius 6 Nov 18 02:10 ** <----- sigh
drwxrwxr-x 1 zenobius zenobius 128 Nov 15 11:02 casper
drwxrwxr-x 1 zenobius zenobius 250 Nov 18 00:08 crycilium
I guess my question is really:
Can I use some kind of regex named patterns?
Could I use a function in the files option to process the output name as the destination?
So the solution was to make use of grunt.file.expandMapping (thanks to https://stackoverflow.com/a/16672303/454615):
...
themes: {
  files: grunt.file.expandMapping([
    "content/themes/**/src/**/*.sass",
    "!content/themes/**/src/**/_*.sass"
  ], '', {
    expand: true,
    ext: '.css',
    rename: function(base, src) {
      grunt.log.write(base + " " + src);
      return src.replace('/src/', '/../'); // or some variation
    }
  })
}
...
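As a possible alternative (my sketch, not part of the original answer), Grunt's built-in dynamic file mapping with expand/cwd/src/dest/ext/rename can achieve the same per-file destination rewriting without calling grunt.file.expandMapping directly; the rename logic below just mirrors the /src/ replacement used above:

themes: {
  files: [{
    expand: true,                // enable dynamic src-to-dest mapping
    cwd: 'content/themes',       // src patterns are relative to this folder
    src: ['**/src/sass/*.sass', '!**/src/sass/_*.sass'],
    dest: 'content/themes',      // re-root the output under the themes folder
    ext: '.css',
    rename: function (dest, matchedSrcPath) {
      // e.g. casper/src/sass/screen.css -> content/themes/casper/css/screen.css
      return dest + '/' + matchedSrcPath.replace('src/sass/', 'css/');
    }
  }]
}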