Puppet: no implicit conversion of String into Integer

I am using Puppet to manage a Varnish server with multiple backends. I am trying to create a loop so that additional backends can be added at a later date. So far I have the following in the ERB file:
<% @backends.each do |backend| -%>
backend <%= backend['backend_name'] %> {
  .host = "<%= @backend_addr %>";
  .port = "<%= backend['backend_port'] %>";
  .connect_timeout = 600s;
  .first_byte_timeout = 600s;
  .between_bytes_timeout = 600s;
}
<% end -%>
But when this is run I get the error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Failed to parse template varnish/drupal.vcl.erb:
Filepath: /etc/puppet/modules/varnish/templates/drupal.vcl.erb
Line: 17
Detail: no implicit conversion of String into Integer
at /etc/puppet/modules/varnish/manifests/init.pp:22 on node x.x.x.x
What am I doing wrong?
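In Ruby, "no implicit conversion of String into Integer" is the error raised when an Array is indexed with a String, which points at `backend['backend_name']` (or `backend['backend_port']`) being called on something that is not a Hash. That happens, for instance, when `backends` is passed as a Hash of hashes: `each` then yields `[key, value]` pairs, so `backend` inside the loop is a two-element Array. A minimal sketch of the template under that assumption (the key names are taken from the snippet above and are otherwise a guess):
<% @backends.each do |name, backend| -%>
backend <%= name %> {
  .host = "<%= @backend_addr %>";
  .port = "<%= backend['backend_port'] %>";
  .connect_timeout = 600s;
  .first_byte_timeout = 600s;
  .between_bytes_timeout = 600s;
}
<% end -%>
If `backends` really is meant to be an Array of Hashes, check how the parameter is set in init.pp (line 22 in the error); a value that arrives as a String or as an Array of Arrays will fail at the first String subscript in exactly this way.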

Related

Varnish is unable to take hold of port 80 on an EC2 machine

I am trying to run Varnish in a Docker container on an EC2 instance.
I tried doing the same locally and it worked fine, but here it keeps giving the error:
Error: Could not get socket :80: Permission denied
My vcl looks like:
vcl 4.0;

backend default {
    .host = "x.y.z.y";
    .port = "8090";
}

sub vcl_recv {
    if (req.method == "BAN") {
        ban("obj.http.x-host == " + req.http.host + " && obj.http.x-url ~ " + req.url);
        return(synth(200, "Banned added"));
    }
}

sub vcl_backend_response {
    # Store URL and HOST in the cached response.
    set beresp.http.x-url = bereq.url;
    set beresp.http.x-host = bereq.http.host;
}

sub vcl_deliver {
    # Prevent the client from seeing these additional headers.
    unset resp.http.x-url;
    unset resp.http.x-host;
}
and there is no process running on port 80.
Binding to port 80 requires root privileges; try running the docker command as root, or add your user to the docker group.
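For example (a sketch of the two options suggested above; the image name is a placeholder, not taken from the question):
# option 1: run the docker command as root
sudo docker run -d -p 80:80 my-varnish-image

# option 2: let your user talk to the Docker daemon without sudo
sudo usermod -aG docker $USER
# log out and back in for the group change to take effect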

SocketIO Vue app not able to connect to backend, getting status 400

I have a Vue.js application that uses Socket.IO and am able to run it locally but not in a prod setup with S3 and a public socket server. The Vue.js dist/ build is on AWS S3 set up in a public website format. I have DNS and SSL provided by Cloudflare for the S3 bucket.
My socket.io server is running in a Kubernetes cluster created using kOps on AWS. I have a network load balancer in front of it, with nginx-ingress handling ingress. I have added a few annotations while debugging; those are at the bottom of the annotations section below.
Error message:
WebSocket connection to '<URL>' failed: WebSocket is closed before the connection is established.
Issue: I am trying to get my front end to connect to the socket.io server to send messages back and forth. However, I can't, due to the above error message, and I am trying to figure out what is causing it.
ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    # add an annotation indicating the issuer to use.
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt"
    # needed to allow the front end to talk to the back end
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://app.domain.ca"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, PUT, POST, DELETE, PATCH, OPTIONS"
    # needed for monitoring - maybe
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
    # for nginx ingress controller
    ad.datadoghq.com/nginx-ingress-controller.check_names: '["nginx","nginx_ingress_controller"]'
    ad.datadoghq.com/nginx-ingress-controller.init_configs: '[{},{}]'
    ad.datadoghq.com/nginx-ingress-controller.instances: '[{"nginx_status_url": "http://%%host%%:18080/nginx_status"},{"prometheus_url": "http://%%host%%:10254/metrics"}]'
    ad.datadoghq.com/nginx-ingress-controller.logs: '[{"service": "controller", "source":"nginx-ingress-controller"}]'
    # Allow websockets to work
    nginx.ingress.kubernetes.io/websocket-services: socketio
    nginx.org/websocket-services: socketio
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
  name: socketio-ingress
  namespace: domain
spec:
  rules:
  - host: socket.domain.ca
    http:
      paths:
      - backend:
          serviceName: socketio
          servicePort: 9000
        path: /
  tls:
  - hosts:
    - socket.domain.ca
    secretName: socket-ingress-cert
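(Side note: the networking.k8s.io/v1beta1 Ingress API used above has since been removed from Kubernetes. A sketch of the same rule in the networking.k8s.io/v1 shape, keeping the names from the manifest above, would be:)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: socketio-ingress
  namespace: domain
spec:
  rules:
  - host: socket.domain.ca
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: socketio
            port:
              number: 9000
  tls:
  - hosts:
    - socket.domain.ca
    secretName: socket-ingress-cert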
Socket.IO part of server.js:
const server = http.createServer();
const io = require("socket.io")(server, {
  cors: {
    origin: config.CORS_SOCKET, // confirmed this is -- https://app.domain.ca -- via a console.log
  },
  adapter: require("socket.io-redis")({
    pubClient: redisClient,
    subClient: redisClient.duplicate(),
  }),
});
Vue.js main:
const socket = io(process.env.VUE_APP_SOCKET_URL);
Vue.use(new VueSocketIO({
  debug: true,
  connection: socket,
  vuex: {
    store,
    actionPrefix: "SOCKET_",
    mutationPrefix: "SOCKET_"
  }
}));
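One debugging step that is often useful in this kind of setup (a sketch, not a confirmed fix for this case): pass explicit options to the socket.io client so it is clear which path and transport the ingress has to carry. VUE_APP_SOCKET_URL is taken from the snippet above; the options shown are standard socket.io-client options:
const socket = io(process.env.VUE_APP_SOCKET_URL, {
  path: "/socket.io",          // default Socket.IO path; the ingress must route it to the service
  transports: ["websocket"],   // skip the HTTP long-polling upgrade and go straight to WebSocket
  withCredentials: true        // needed when the server/ingress allow credentials over CORS
});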

Zingchart PHP wrapper issue

I am attempting to set up Zingchart using the PHP wrapper module.
Server:
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
Running as a VM under VMWare
I am receiving an error with the example code when calling the ZC->connect method.
Thus:
<HTML>
<HEAD>
<script src="zingchart.min.js"></script>
</HEAD>
<BODY>
<h3>Simple Bar Chart (Database)</h3>
<div id="myChart"></div>
<?php
include 'zc.php';
use ZingChart\PHPWrapper\ZC;
$servername = "localhost";
$username = "nasuser";
$password = "password";
$db = "nas";
$myQuery = "SELECT run_time, time_total from stats";
// ################################ CHART 1 ################################
// This chart will use data pulled from our database
$zc = new ZC("myChart", "bar");
$zc->connect($servername, "3306", $username, $password, $db);
$data = $zc->query($myQuery, true);
$zc->closeConnection();
$zc->setSeriesData($data);
$zc->setSeriesText($zc->getFieldNames());
$zc->render();
?>
</BODY></HTML>
Error:
# php index.php
PHP Fatal error: Uncaught Error: Class 'ZingChart\PHPWrapper\mysqli' not found in /var/www/html/zc.php:69
Stack trace:
#0 /var/www/html/index.php(22): ZingChart\PHPWrapper\ZC->connect('localhost', '3306', 'nasuser', 'password', 'nas')
Line 69 is:
$this->mysqli = new mysqli($host, $username, $password, $dbName, $port);
Running:
# php -m | grep -i mysql
mysqli
mysqlnd
pdo_mysql
Seems to indicate that I have the relevant packages installed. Indeed, if I attempt a normal PHP connection it works:
<?php
$servername = "localhost";
$username = "nasuser";
$password = "password";
$dbname = "nas";

// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);
// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}

$sql = "SELECT * from stats limit 10;";
$result = $conn->query($sql);
if ($result->num_rows > 0) {
    // output data of each row
    while ($row = $result->fetch_assoc()) {
        echo "id: " . $row["stat_id"] . "<br>";
    }
} else {
    echo "0 results";
}
$conn->close();
?>
Output:
# php db.php
id: 1<br>id: 2<br>id: 3<br>id: 4<br>id: 5<br>id: 6<br>id: 7<br>id: 8<br>id: 9<br>id: 10<br>
Any ideas?
Thanks.
I've encountered the same problem!
There's a namespace problem with how mysqli is resolved.
I changed the
$this->mysqli = new mysqli($host, $username, $password, $dbName, $port);
in ZC.php to
$this->mysqli = new \mysqli($host,$username,$password, $dbName, $port);
and it worked.
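The underlying cause: ZC.php declares namespace ZingChart\PHPWrapper, so the bare name mysqli resolves inside that namespace (hence "Class 'ZingChart\PHPWrapper\mysqli' not found"), while a leading backslash forces the global class. A minimal sketch of the same rule outside the wrapper:
<?php
namespace ZingChart\PHPWrapper;

// new mysqli(...)  would be looked up as ZingChart\PHPWrapper\mysqli and fail,
// whereas \mysqli always refers to the built-in global class:
$conn = new \mysqli("localhost", "nasuser", "password", "nas");

// An equivalent fix is to import the global class at the top of the file:
// use mysqli;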

Setting up nginx with multiple IPs

I have my nginx configuration file under /etc/nginx/sites-available/ with two upstreams, say:
upstream test1 {
    server 1.1.1.1:50;
    server 1.1.1.2:50;
}
upstream test2 {
    server 2.2.2.1:60;
    server 2.2.2.2:60;
}
server {
    location / {
        proxy_pass http://test1;
    }
    location / {
        proxy_pass http://test2;
    }
}
Sending a curl request to <PrimaryIP>:80 works but I want to use <SecondaryIP1>:80 for test1 and <SecondaryIP2>:80 for test2. Is it possible to define this in nginx?
You need two separate server blocks to accomplish this:
upstream test1 {
    server 1.1.1.1:50;
    server 1.1.1.2:50;
}
upstream test2 {
    server 2.2.2.1:60;
    server 2.2.2.2:60;
}
server {
    listen 80;
    server_name <SecondaryIP1>;
    location / {
        proxy_pass http://test1;
    }
}
server {
    listen 80;
    server_name <SecondaryIP2>;
    location / {
        proxy_pass http://test2;
    }
}
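If the intention is to pick the upstream purely by the destination address rather than by the Host header that server_name matches, binding each server block to a specific IP also works. A variant of the answer above, with the real secondary IPs substituted in:
server {
    listen <SecondaryIP1>:80;
    location / {
        proxy_pass http://test1;
    }
}
server {
    listen <SecondaryIP2>:80;
    location / {
        proxy_pass http://test2;
    }
}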

Using multiple vhost templates in Puppet

I'd like to use multiple vhost templates from my apache module in my nodes manifest, and so far I'm not having any luck.
I have one vhost template in my apache module that looks like this. This is my apache::vhost template:
cat modules/apache/templates/vhost.conf.erb
<VirtualHost *:<%= port %>>
ServerName <%= name %>
<%if serveraliases.is_a? Array -%>
<% serveraliases.each do |name| -%>
<%= " ServerAlias #{name}\n" %><% end -%>
<% elsif serveraliases != '' -%>
<%= " ServerAlias #{serveraliases}" -%>
<% end -%>
php_value newrelic.appname <%= name %>
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100
LogFormat "{ \
\"host\":\"<%= name %>.<%= domain %>\", \
\"path\":\"/var/log/httpd/jf_<%= name %>_access_log\", \
\"tags\":[\"Jokefire <%= name %>\"], \
\"message\": \"%h %l %u %t \\\"%r\\\" %>s %b\", \
\"timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \
\"clientip\": \"%a\", \
\"duration\": %D, \
\"status\": %>s, \
\"request\": \"%U%q\", \
\"urlpath\": \"%U\", \
\"urlquery\": \"%q\", \
\"method\": \"%m\", \
\"bytes\": %B, \
\"vhost\": \"%v\" \
}" <%= name %>_access_json
CustomLog /var/log/httpd/jf_<%= name %>_access_log <%= name %>_access_json
LogLevel debug
ErrorLog /var/log/httpd/jf_<%= name %>_error_log
DirectoryIndex index.html index.php
DocumentRoot <%= docroot %>
<Directory <%= docroot %>>
Options Indexes FollowSymLinks
AllowOverride All
Order allow,deny
allow from all
</Directory>
ServerSignature On
</VirtualHost>
And when I use that template in my nodes.pp manifest, it works totally fine:
apache::vhost { 'dev.example.com':
  port     => 80,
  docroot  => '/var/www/jf-wp',
  ssl      => false,
  priority => 002,
}
But when I try to use another vhost template with different settings in my nodes.pp manifest, I get an error. This is the apache::vhost_admin template that I can't get to work:
#cat modules/apache/templates/vhost_admin.conf.erb
<VirtualHost *:<%= port %>>
ServerName <%= name %>
<%if serveraliases.is_a? Array -%>
<% serveraliases.each do |name| -%>
<%= " ServerAlias #{name}\n" %><% end -%>
<% elsif serveraliases != '' -%>
<%= " ServerAlias #{serveraliases}" -%>
<% end -%>
php_value newrelic.enabled false
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100
LogFormat "{ \
\"host\":\"<%= name %>.<%= domain %>\", \
\"path\":\"/var/log/httpd/jf_<%= name %>_access_log\", \
\"tags\":[\"Jokefire <%= name %>\"], \
\"message\": \"%h %l %u %t \\\"%r\\\" %>s %b\", \
\"timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \
\"clientip\": \"%a\", \
\"duration\": %D, \
\"status\": %>s, \
\"request\": \"%U%q\", \
\"urlpath\": \"%U\", \
\"urlquery\": \"%q\", \
\"method\": \"%m\", \
\"bytes\": %B, \
\"vhost\": \"%v\" \
}" <%= name %>_access_json
CustomLog /var/log/httpd/jf_<%= name %>_access_log <%= name %>_access_json
LogLevel debug
ErrorLog /var/log/httpd/jf_<%= name %>_error_log
DirectoryIndex index.html index.php
DocumentRoot <%= docroot %>
<Directory <%= docroot %>>
Options Indexes FollowSymLinks
AllowOverride All
Order allow,deny
allow from all
</Directory>
ServerSignature On
</VirtualHost>
And when I try to define apache::vhost_admin in my nodes.pp file:
apache::vhost_admin { 'admin.example.com':
  port          => 80,
  docroot       => '/var/www/admin',
  ssl           => false,
  priority      => 004,
  serveraliases => 'www.admin.example.com',
}
When I declare the apache::vhost_admin resource in the nodes.pp manifest, I get the following error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Puppet::Parser::AST::Resource failed with error ArgumentError: Invalid resource type apache::vhost_admin at /etc/puppet/environments/production/manifests/nodes.pp:139 on node web1.jokefire.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
What am I doing wrong? How can I set up multiple vhost definitions in Puppet, each with different settings?
After the discussion with @bluethundr, it looks like the "apache::vhost_admin" define was missing.
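In other words, a template file by itself does not create a new resource type; the module also needs a defined type, e.g. in modules/apache/manifests/vhost_admin.pp. A minimal sketch of such a define (parameter names follow the nodes.pp declaration above; the conf.d path, the template call, and the notify on Service['httpd'] are assumptions about how the existing apache::vhost is written):
define apache::vhost_admin (
  $port,
  $docroot,
  $ssl           = false,
  $priority      = '010',
  $serveraliases = '',
) {
  # Render the admin vhost template into Apache's configuration directory.
  file { "/etc/httpd/conf.d/${priority}-${name}.conf":
    ensure  => file,
    content => template('apache/vhost_admin.conf.erb'),
    notify  => Service['httpd'],
  }
}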
