I am writing code that tries to load a config.yaml file:
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
pub struct MyConfig {
    foo: String,
    conf: Vec<String>
}

impl ::std::default::Default for MyConfig {
    fn default() -> Self { Self { foo: "".into(), conf: vec![] } }
}

fn main() -> Result<(), confy::ConfyError> {
    let cfg: MyConfig = confy::load("config")?;
    println!("{:#?}", cfg);
    Ok(())
}
config.yaml file:
foo: "test"
conf:
gg
gb
gg
bb
Output:
MyConfig {
    foo: "",
    conf: [],
}
I have kept the config.yaml file in the same folder as the code that calls it. It looks like it is not able to load the file at all. What am I missing?
EDIT: When I changed the extension from yaml to toml and provided the full path, it found the file, but the structure it expects is:
config.toml
foo = "test"
conf = ["gg","gb","gv","gx"]
full path
confy::load("c:/test/config")?;
I tried keeping it in multiple places but it was not picked up; it looks like it requires the full path.
But with that I got the output:
MyConfig {
    foo: "test",
    conf: [
        "gg",
        "gb",
        "gv",
        "gx",
    ],
}
While David Chopin's answer is correct that the YAML is not right, there are a couple of deeper issues.
Firstly, while it is not really documented, looking at the confy source, it expects TOML-formatted data, not YAML - for simple cases they can be similar, I think. (Turns out this is not 100% correct - the GitHub page says you can switch to YAML using features = ["yaml_conf"] in the Cargo.toml file.)
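For example, the dependency entry would look roughly like this (the version number is illustrative, and I believe the default TOML backend has to be disabled so the two format features do not conflict - check the confy README for the version you are using):

[dependencies]
# Hypothetical version; enables confy's YAML backend instead of the default TOML one
confy = { version = "0.5", default-features = false, features = ["yaml_conf"] }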
Secondly, I'm guessing the root problem is that confy is not finding your configuration file.
The docs for confy::load state:
Load an application configuration from disk
A new configuration file is created with default values if none exists.
So I think it is looking somewhere else, not finding your file, and instead of erroring it is creating a nice default file in that location and then returning that default to you.
I believe it should be formatted as follows:
foo: "test"
conf:
- gg
- gb
- gg
- bb
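Incidentally, if you want to take the path guessing out of the equation entirely, newer confy versions also expose a load_path function that reads an explicit file (a sketch, assuming your confy version provides it):

// Point confy at an explicit file instead of letting it choose
// an OS-specific configuration directory.
let cfg: MyConfig = confy::load_path("c:/test/config.toml")?;
println!("{:#?}", cfg);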
I am trying to generate a file by template rendering to pass to the user data of an EC2 instance. I am using a third-party Terraform provider to generate an Ignition file from the YAML.
data "ct_config" "worker" {
content = data.template_file.file.rendered
strict = true
pretty_print = true
}
data "template_file" "file" {
...
...
template = file("${path.module}/example.yml")
vars = {
script = file("${path.module}/script.sh")
}
}
example.yml
storage:
  files:
    - path: "/opt/bin/script"
      mode: 0755
      contents:
        inline: |
          ${script}
Error:
Error: Error unmarshaling yaml: yaml: line 187: could not find expected ':'
on ../../modules/launch_template/launch_template.tf line 22, in data "ct_config" "worker":
22: data "ct_config" "worker" {
If I change ${script} to sample data then it works. Also, no matter what I put in script.sh, I get the same error.
You want this outcome (pseudocode):
storage:
  files:
    - path: "/opt/bin/script"
      mode: 0755
      contents:
        inline: |
          {{content of script file}}
In your current implementation, every line of script.sh after the first will not be indented, so a YAML decoder will not interpret the whole of script.sh as the value of the inline block scalar, which is what you want.
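To make this concrete, here is roughly what the rendered YAML ends up looking like when script.sh has more than one line (the script content here is made up purely for illustration):

storage:
  files:
    - path: "/opt/bin/script"
      mode: 0755
      contents:
        inline: |
          #!/bin/bash
echo "this second line lands at column 0 and breaks the block scalar"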
Using indent you can correct the indentation, and using the newer templatefile function you can use a slightly cleaner setup for the template:
data "ct_config" "worker" {
content = local.ct_config_content
strict = true
pretty_print = true
}
locals {
ct_config_content = templatefile("${path.module}/example.yml", {
script = indent(10, file("${path.module}/script.sh"))
})
}
For clarity, here is the example.yml template file (from the original question) to use with the code above:
storage:
  files:
    - path: "/opt/bin/script"
      mode: 0755
      contents:
        inline: |
          ${script}
I had this exact issue with ct_config, and figured it out today. You need to base64encode your script to ensure it's written correctly without newlines - without that, newlines in your script will make it to CT, which attempts to build an Ignition file, which cannot have newlines, causing the error you ran into originally.
Once encoded, you then just need to tell CT to !!binary the file to ensure Ignition correctly base64 decodes it on deploy:
data "template_file" "file" {
...
...
template = file("${path.module}/example.yml")
vars = {
script = base64encode(file("${path.module}/script.sh"))
}
}
storage:
  files:
    - path: "/opt/bin/script"
      mode: 0755
      contents:
        inline: !!binary |
          ${script}
I'm trying to implement log4rs by following the docs. My goal is to put the result of info!("INFO") into the file requests.log, but I get an error:
thread 'main' panicked at 'called Result::unwrap() on an Err
value: Log4rs(Os { code: 2, kind: NotFound, message: "No such file or
directory" })', libcore/result.rs:945:5
I have the following files in the src folder:
- main.rs
- log4rs.yml
- requests.log
main.rs:
#[macro_use]
extern crate log;
extern crate log4rs;

fn main() {
    println!("Hello, world!");
    log4rs::init_file("log4rs.yml", Default::default()).unwrap();
    info!("INFO");
}
the config file log4rs.yml:
# Scan this file for changes every 30 seconds
refresh_rate: 30 seconds

appenders:
  # An appender named "stdout" that writes to stdout
  stdout:
    kind: console

  # An appender named "requests" that writes to a file with a custom pattern encoder
  requests:
    kind: file
    path: "requests.log"
    encoder:
      pattern: "{d} - {m}{n}"

# Set the default logging level to "warn" and attach the "stdout" appender to the root
root:
  level: warn
  appenders:
    - stdout

loggers:
  # Raise the maximum log level for events sent to the "app::backend::db" logger to "info"
  app::backend::db:
    level: info

  # Route log events sent to the "app::requests" logger to the "requests" appender,
  # and *not* the normal appenders installed at the root
  app::requests:
    level: info
    appenders:
      - requests
    additive: false
When you type cargo run, the process inherits the directory you launched it from as its working directory. This means that all your relative paths are resolved against that working directory.
For example, say your project folder is named foo and lives in your home directory (~). When you go into it, you are in ~/foo. If you now type cargo run, log4rs will try to open ~/foo/log4rs.yml. The file is not there but in ~/foo/src/log4rs.yml.
You have many solutions:
- log4rs::init_file("src/log4rs.yml", Default::default()).unwrap();
- move log4rs.yml to foo
- use an absolute path (not a good solution for your current case)
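If you want the lookup to be independent of the directory cargo run is invoked from, another option is to build the path from the compile-time CARGO_MANIFEST_DIR variable (a sketch; the path is baked into the binary at compile time, so this mainly helps during development):

// CARGO_MANIFEST_DIR is the crate root at compile time, so this resolves
// src/log4rs.yml no matter which directory `cargo run` is started from.
let config_path = concat!(env!("CARGO_MANIFEST_DIR"), "/src/log4rs.yml");
log4rs::init_file(config_path, Default::default()).unwrap();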
I want to write a Job DSL build script for Jenkins in Groovy which automatically makes deploy jobs for our projects. Each project has a general yml file with the Ansible roles and hosts parameters, which I want to read and use to configure the job.
The problem is that so far I'm using SnakeYAML to read the yml file, but it returns an ArrayList (rather than a map), which I cannot use efficiently.
Does anyone know a better solution?
my yml sample file:
---
- hosts: app.host
  roles:
    - role: app-db
      db_name: myproje_db
      db_port: "3306"
      migrate_module: "my-proje-api"

    - role: java-app
      app_name: "myproje-api"
      app_artifact_name: "my-proje-api"
      app_links:
        - myproje_db
I read the file from the workspace in my main Groovy script:
InputStream configFile = streamFileFromWorkspace('data/config.yml')
and process it in another function of another class:
public String configFileReader(def out, InputStream configFile) {
    def map
    Yaml configFileYml = new Yaml()
    map = configFileYml.load(configFile)
}
It returns the map variable's class type as ArrayList.
This is the expected output: the configuration starts with a "-", which represents a list. It is "a collection of hosts, and each host has a set of roles".
If you want to iterate over each host, you can do:
Yaml configFileYml = new Yaml()
configFileYml.load(configFile).each { host -> ... }
When this configuration is read, it's equivalent to the following structure (in groovy format):
[ // collection of map (host)
    [ // 1 map for each host
        hosts: "app.host",
        roles: [ // collection of map (role)
            [ // 1 map for each role
                role: 'app-db',
                db_name: 'myproje_db',
                db_port: "3306",
                migrate_module: "my-proje-api"
            ],
            [
                role: 'java-app',
                app_name: "myproje-api",
                app_artifact_name: "my-proje-api",
                app_links: ['myproje_db']
            ]
        ]
    ]
]
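So to pull individual values out you just index into those nested lists and maps, for example (a sketch against the sample file above; the println output is only illustrative):

Yaml configFileYml = new Yaml()
configFileYml.load(configFile).each { host ->
    println host.hosts                                   // app.host
    host.roles.each { role ->
        // each role is a plain map, so its keys read like properties
        println "${role.role}: ${role.app_name ?: role.db_name}"
    }
}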
I use r.js to cobble together all the JS code in my SPA into one file. I use grunt's grunt-contrib-requirejs task for this, with the following:
requirejs: {
  compile: {
    options: {
      name: 'app',
      out: 'build/js/app.js',
      baseUrl: 'app',
      mainConfigFile: 'config/main.js',
      preserveLicenseComments: true,
      optimize: "none"
    }
  }
}
I also use a build task that zips the build folder into a zip file for me to send to our company's change management folks.
I would like to have two requirejs tasks - one that uglifies (for sending to CM) and one that doesn't (during development). I tried creating a new task with a different name and grunt yelled at me... it should be simple. Is this possible? Are there any reasons not to do this?
Thanks in advance!
Actually it is very simple:
requirejs: {
  compile: {
    options: {
      ...
      optimize: "none"
    }
  },
  compileForProduction: {
    options: {
      ...
      optimize: "uglify2"
    }
  }
}
(the options are the same as yours, with whatever differences between the two targets are required, e.g. optimize)
Run it with:
grunt requirejs:compileForProduction
or in Gruntfile.js:
grunt.registerTask("prod", ["requirejs:compileForProduction"]);
and:
grunt prod
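And, if you want a shortcut for the development build as well, you can register a matching alias (the name "dev" is just an example):

grunt.registerTask("dev", ["requirejs:compile"]);

and:

grunt dev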
My Brunch template compiles all my code into app.js and all third-party dependencies into vendor.js (a pretty standard approach). I'd like to do the same with CSS. It used to work, but since I moved to using Bower something stopped working and I now get the following error:
Error: couldn't load config /path-to-root/config.coffee. SyntaxError: unexpected {
at Object.exports.loadConfig (/usr/local/share/npm/lib/node_modules/brunch/lib/helpers.js:448:15)
from a configuration file (config.coffee) that looks like this:
files:
  javascripts:
    joinTo:
      'javascripts/app.js': /^app/
      'javascripts/vendor.js': /^(bower_components|vendor)/
      'test/javascripts/test-vendor.js': /^test(\/|\\)(?=vendor)/
  stylesheets:
    joinTo:
      'stylesheets/app.css': /^app/
      'stylesheets/vendor.css': /^(bower_components|vendor)/
If I instead just strip out the two lines for stylesheets and put this single line in its place it works without error:
'stylesheets/vendor.css': /^(app|bower_components|vendor)/
I've been sort of living with this but this is causing more and more problems and I'd like to get it sorted. Any help would be greatly appreciated.
In case the question comes up ... the version of brunch I'm using is 1.7.6.
I am baffled, but Paul's suggestion that maybe a special character had gotten into the file seems likely. I now have it working with a configuration that appears to be identical to what was NOT working earlier. Here's the full configuration file:
sysPath = require 'path'

exports.config =
  # See http://brunch.io/#documentation for documentation.
  files:
    javascripts:
      joinTo:
        'javascripts/app.js': /^app/
        'javascripts/vendor.js': /^(bower_components|vendor)/
        'test/javascripts/test-vendor.js': /^test(\/|\\)(?=vendor)/
    stylesheets:
      joinTo:
        'stylesheets/app.css': /^app/
        'stylesheets/vendor.css': /^(bower_components|vendor)/
    templates:
      precompile: true
      root: 'templates'
      joinTo: 'javascripts/app.js' : /^app/
  modules:
    addSourceURLs: true
  # allow _ prefixed templates so partials work
  conventions:
    ignored: (path) ->
      startsWith = (string, substring) ->
        string.indexOf(substring, 0) is 0
      sep = sysPath.sep
      if path.indexOf("app#{sep}templates#{sep}") is 0
        false
      else
        startsWith sysPath.basename(path), '_'
It's pretty weird, but I had to do the following (add / at the end) for the same case:
stylesheets: {
  joinTo: {
    'css/vendor.css': /^(vendor|bower_components)\//,
    'css/styles.css': /^app\/css\//
  }
}
I had the same problem as Ken. What solved it for me was simply deleting the offending lines from the config.coffee file and re-typing them from scratch. This ensures no hidden characters are present and gets the script running again.