How to set gcAllowVeryLargeObjects to true for an Azure worker role? - azure

I'm unable to understand how to set the gcAllowVeryLargeObjects runtime parameter for a worker role. I set this parameter in app.config, but it doesn't work. As I understand it, I need to somehow configure it in the config file of the worker host.
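For reference, gcAllowVeryLargeObjects is the standard .NET runtime element; what was tried in app.config looks like the sketch below. The GC/runtime settings are read from the hosting process's .exe.config (WaWorkerHost.exe.config for a worker role), which appears to be why putting it in the role's own app.config has no effect:
<configuration>
  <runtime>
    <!-- lets 64-bit processes allocate arrays larger than 2 GB -->
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>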
Update: Final solution based on answer
ServiceDefinition.csdef
<WorkerRole name="XXX" vmsize="Standard_D4">
  <Startup>
    <Task commandLine="myStartup.cmd" executionContext="elevated" taskType="simple">
      <Environment>
        <Variable name="yyy" value="yyy" />
      </Environment>
    </Task>
  </Startup>
The following files should be added to the worker project as content, with "Copy to Output Directory" set to "Copy always":
startup.ps1
# Configure server GC, background GC and gcAllowVeryLargeObjects settings
Function ChangeGCSettings
{
    param ([string]$filename,[bool]$enableServerGC,[bool]$enableBackgroundGC,[bool]$enableVeryLargeGC)

    $xml = New-Object XML
    if (Test-Path ($filename))
    {
        $xml.Load($filename)
    }
    else
    {
        # config file doesn't exist, create a new one
        [System.Xml.XmlDeclaration]$xmlDeclaration = $xml.CreateXmlDeclaration("1.0","utf-8",$null)
        $xml.AppendChild($xmlDeclaration)
    }

    # Check if we already have a configuration section - if not, create it
    $conf = $xml.configuration
    if ($conf -eq $null)
    {
        $conf = $xml.CreateElement("configuration")
        $xml.AppendChild($conf)
    }

    # Check if we already have a runtime section - if not, create it
    $runtime = $conf.runtime
    if ($runtime -eq $null)
    {
        # runtime element doesn't exist
        $runtime = $xml.CreateElement("runtime")
        $conf.AppendChild($runtime)
    }

    # configure gcServer
    $gcserver = $runtime.gcServer
    if ($gcserver -eq $null)
    {
        $gcserver = $xml.CreateElement("gcServer")
        $gcserver.SetAttribute("enabled",$enableServerGC.ToString([System.Globalization.CultureInfo]::InvariantCulture).ToLower())
        $runtime.AppendChild($gcserver);
    }
    else
    {
        $gcserver.enabled=$enableServerGC.ToString([System.Globalization.CultureInfo]::InvariantCulture).ToLower()
    }

    # configure gcAllowVeryLargeObjects
    $gcAllowVeryLargeObjects = $runtime.gcAllowVeryLargeObjects
    if ($gcAllowVeryLargeObjects -eq $null)
    {
        $gcAllowVeryLargeObjects = $xml.CreateElement("gcAllowVeryLargeObjects")
        $gcAllowVeryLargeObjects.SetAttribute("enabled",$enableVeryLargeGC.ToString([System.Globalization.CultureInfo]::InvariantCulture).ToLower())
        $runtime.AppendChild($gcAllowVeryLargeObjects);
    }
    else
    {
        $gcAllowVeryLargeObjects.enabled=$enableVeryLargeGC.ToString([System.Globalization.CultureInfo]::InvariantCulture).ToLower()
    }

    # configure background GC
    # .NET 4.5 is required for background server GC
    # since 4.5 is an in-place upgrade for .NET 4.0, the new background server GC mode is available
    # even if you target 4.0 in your application, as long as the .NET 4.5 runtime is installed (Windows Server 2012 or higher by default)
    #
    # See http://blogs.msdn.com/b/dotnet/archive/2012/07/20/the-net-framework-4-5-includes-new-garbage-collector-enhancements-for-client-and-server-apps.aspx
    $gcConcurrent = $runtime.gcConcurrent
    if ($gcConcurrent -eq $null)
    {
        $gcConcurrent = $xml.CreateElement("gcConcurrent")
        $gcConcurrent.SetAttribute("enabled",$enableBackgroundGC.ToString([System.Globalization.CultureInfo]::InvariantCulture).ToLower())
        $runtime.AppendChild($gcConcurrent);
    }
    else
    {
        $gcConcurrent.enabled=$enableBackgroundGC.ToString([System.Globalization.CultureInfo]::InvariantCulture).ToLower()
    }

    $xml.Save($filename)
}

# Enable server GC, background GC and gcAllowVeryLargeObjects in the worker host configuration
ChangeGCSettings "${env:RoleRoot}\base\x64\WaWorkerHost.exe.config" $true $true $true
myStartup.cmd
REM Set the PowerShell execution policy so that the unsigned startup script can run.
PowerShell -Command "Set-ExecutionPolicy Unrestricted" >> "%TEMP%\StartupLog1.txt" 2>&1
PowerShell .\startup.ps1 >> "%TEMP%\StartupLog2.txt" 2>&1
REM If an error occurred, return the errorlevel.
EXIT /B %errorlevel%

Check out http://blogs.msdn.com/b/cie/archive/2013/11/14/enable-server-gc-mode-for-your-worker-role.aspx, which uses a startup script to enable server GC in a worker role. You should be able to easily modify this to set the gcAllowVeryLargeObjects property.
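To illustrate, after the startup task above has run, the worker host configuration (%RoleRoot%\base\x64\WaWorkerHost.exe.config) should end up with a runtime section along these lines (a sketch of the output the script writes, not a file to author by hand):
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <gcServer enabled="true" />
    <gcAllowVeryLargeObjects enabled="true" />
    <gcConcurrent enabled="true" />
  </runtime>
</configuration>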

Related

Issue installing Terratest using the Go task in a YAML Azure pipeline - issue triggering Terratest tests in sub-folders

I'm facing this issue while installing Terratest from an Azure YAML pipeline:
C:\hostedtoolcache\windows\go\1.17.1\x64\bin\go.exe install -v github.com/gruntwork-io/terratest@v0.40.6
go: downloading github.com/gruntwork-io/terratest v0.40.6
go install: github.com/gruntwork-io/terratest@v0.40.6: module github.com/gruntwork-io/terratest@v0.40.6 found, but does not contain package github.com/gruntwork-io/terratest
##[error]The Go task failed with an error: Error: The process 'C:\hostedtoolcache\windows\go\1.17.1\x64\bin\go.exe' failed with exit code 1
Finishing: Install Go Terratest module - v0.40.6
My code for the installation is below:
- task: Go@0
  displayName: Install Go Terratest module - v$(TERRATEST_VERSION)
  inputs:
    command: custom
    customCommand: install
    arguments: $(TF_LOG) github.com/gruntwork-io/terratest@v$(TERRATEST_VERSION)
    workingDirectory: $(pipeline_artefact_folder_extract)/$(pathToTerraformRootModule)
But perhaps I made mistakes in my use of Terratest.
Below is a screenshot of my code tree:
I have Terraform code in (for example) the Terraform\azure_v2_X\ResourceModules sub-directory, and Terratest tests in the Terraform\azure_v2_X\Tests_Unit_ResourceModules subdirectories (in the screenshot, app_configuration tests for the app_configuration resource module).
In my Terratest module, I call my resource module as in the following code:
# test in an isolated Resource Group defined in locals
module "app_configuration_tobetested" {
  source              = "../../ResourceModules/app_configuration"
  resource_group_name = local.rg_name
  location            = local.location
  environment         = var.ENVIRONMENT
  sku                 = "standard"
  // rem: here the app_service_shared prefix and the app_config_shared prefix are the same!
  app_service_prefix  = module.app_configuration_list_fortests.settings.frontEnd_prefix
  # stage             = var.STAGE
  app_config_list     = module.app_configuration_list_fortests.settings.list_app_config
}
And in my Go file, I test the module output against the expected result:
package RM_app_configuration_Test

import (
    "os"
    "testing"

    // "github.com/gruntwork-io/terratest/modules/azure"
    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)

var (
    globalBackendConf = make(map[string]interface{})
    globalEnvVars     = make(map[string]string)
)

func TestTerraform_RM_app_configuration(t *testing.T) {
    t.Parallel()
    // terraform directory
    fixtureFolder := "./"
    // backend specification
    strlocal := "RMapCfg_"
    // input values
    inputStage := "sbx_we"
    inputEnvironment := "SBX"
    inputApplication := "DEMO"
    // expected values
    expectedRsgName := "z-adf-ftnd-shrd-sbx-ew1-rgp01"
    // expectedAppCfgPrefix := "z-adf-ftnd-shrd"
    expectedAppConfigReader_ID := "[/subscriptions/f04c8fd5-d013-41c3-9102-43b25880d2e2/resourceGroups/z-adf-ftnd-shrd-sbx-ew1-rgp01/providers/Microsoft.AppConfiguration/configurationStores/z-adf-ftnd-shrd-sbx-ew1-blue-sbx-cfg01 /subscriptions/f04c8fd5-d013-41c3-9102-43b25880d2e2/resourceGroups/z-adf-ftnd-shrd-sbx-ew1-rgp01/providers/Microsoft.AppConfiguration/configurationStores/z-adf-ftnd-shrd-sbx-ew1-green-sbx-cfg01]"

    // getting EnvVars from environment variables
    /*
       Go and Terraform use two different methods for Azure authentication.
       ** Terraform authentication is explained below:
          - https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret#configuring-the-service-principal-in-terraform
       ** Go authentication is explained below:
          - https://learn.microsoft.com/en-us/azure/developer/go/azure-sdk-authorization#use-environment-based-authentication
       ** Terratest uses both authentication methods depending on the work to be done:
          - Azure existence tests use the Go Azure authentication:
            - https://github.com/gruntwork-io/terratest/blob/master/modules/azure/authorizer.go#L11
          - terraform commands use the Terraform authentication:
            - https://github.com/gruntwork-io/terratest/blob/0d654bd2ab781a52e495f61230cf892dfba9731b/modules/terraform/cmd.go#L12
            - https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret#configuring-the-service-principal-in-terraform
       so both authentication methods have to be implemented
    */
    // getting terraform EnvVars from Azure Go environment variables
    ARM_CLIENT_ID := os.Getenv("AZURE_CLIENT_ID")
    ARM_CLIENT_SECRET := os.Getenv("AZURE_CLIENT_SECRET")
    ARM_TENANT_ID := os.Getenv("AZURE_TENANT_ID")
    ARM_SUBSCRIPTION_ID := os.Getenv("ARM_SUBSCRIPTION_ID")
    if ARM_CLIENT_ID != "" {
        globalEnvVars["ARM_CLIENT_ID"] = ARM_CLIENT_ID
        globalEnvVars["ARM_CLIENT_SECRET"] = ARM_CLIENT_SECRET
        globalEnvVars["ARM_SUBSCRIPTION_ID"] = ARM_SUBSCRIPTION_ID
        globalEnvVars["ARM_TENANT_ID"] = ARM_TENANT_ID
    }
    // getting the terraform backend from environment variables
    resource_group_name := os.Getenv("resource_group_name")
    storage_account_name := os.Getenv("storage_account_name")
    container_name := os.Getenv("container_name")
    key := strlocal + os.Getenv("key")
    if resource_group_name != "" {
        globalBackendConf["resource_group_name"] = resource_group_name
        globalBackendConf["storage_account_name"] = storage_account_name
        globalBackendConf["container_name"] = container_name
        globalBackendConf["key"] = key
    }
    // use Terratest to deploy the infrastructure
    terraformOptions := terraform.WithDefaultRetryableErrors(t, &terraform.Options{
        // website::tag::1::Set the path to the Terraform code that will be tested.
        // The path to where our Terraform code is located
        TerraformDir: fixtureFolder,
        // Variables to pass to our Terraform code using -var options
        Vars: map[string]interface{}{
            "STAGE":       inputStage,
            "ENVIRONMENT": inputEnvironment,
            "APPLICATION": inputApplication,
        },
        EnvVars: globalEnvVars,
        // backend values to set when initializing Terraform
        BackendConfig: globalBackendConf,
        // Disable colors in Terraform commands so it's easier to parse stdout/stderr
        NoColor: true,
    })
    // website::tag::4::Clean up resources with "terraform destroy". Using "defer" runs the command at the end of the test, whether the test succeeds or fails.
    // At the end of the test, run `terraform destroy` to clean up any resources that were created
    defer terraform.Destroy(t, terraformOptions)
    // website::tag::2::Run "terraform init" and "terraform apply".
    // This will run `terraform init` and `terraform apply` and fail the test if there are any errors
    terraform.InitAndApply(t, terraformOptions)
    // test the resource group for the app_configuration
    /*
       actualAppConfigReaderPrefix := terraform.Output(t, terraformOptions, "app_configuration_tested_prefix")
       assert.Equal(t, expectedAppCfgPrefix, actualAppConfigReaderPrefix)
    */
    actualRSGReaderName := terraform.Output(t, terraformOptions, "app_configuration_tested_RG_name")
    assert.Equal(t, expectedRsgName, actualRSGReaderName)
    actualAppConfigReader_ID := terraform.Output(t, terraformOptions, "app_configuration_tobetested_id")
    assert.Equal(t, expectedAppConfigReader_ID, actualAppConfigReader_ID)
}
The fact is that locally, from my main folder Terraform\Azure_v2_X\Tests_Unit_ResourceModules, I can run the following command to trigger all my tests in a row:
(since Go 1.11)
go test ./...
With Go 1.12, I could set GO111MODULE=auto to get the same results.
But with Go 1.17, I now have to set GO111MODULE=off to trigger my tests.
For now, I have two main questions nagging me:
How can I import Terratest (and other) Go modules from an Azure pipeline?
What do I have to do to correctly use Go modules with Terratest?
I have no Go code in my main folder _Terraform\Azure_v2_X\Tests_Unit_ResourceModules_ and would like to trigger all the sub-folder Go tests with a simple command line in my Azure pipeline.
Thank you for any help you can give.
Best regards,
I will once again answer my own question. :D
So, for now, I am using the following versions:
-- GOVERSION: 1.17.1
-- TERRAFORM_VERSION: 1.1.7
-- TERRATEST_VERSION: 0.40.6
The folder hierarchy has changed as follows regarding the Terratest tests:
I no longer try to Go-import my Terratest module (so point 1 above is answered, obviously).
I now just have to:
go mod init each of my Terratest modules
trigger each of them individually, one by one, using a script
So my pipeline just became the following:
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-installer-task.TerraformInstaller@0
  displayName: Install Terraform $(TERRAFORM_VERSION)
  inputs:
    terraformVersion: $(TERRAFORM_VERSION)
- task: GoTool@0
  displayName: 'Use Go $(GOVERSION)'
  inputs:
    version: $(GOVERSION)
    goPath: $(GOPATH)
    goBin: $(GOBIN)
- task: PowerShell@2
  displayName: run Terratest for $(pathToTerraformRootModule)
  inputs:
    targetType: 'filePath'
    filePath: $(pipeline_artefact_folder_extract)/$(pathToTerraformRootModule)/$(Run_Terratest_script)
    workingDirectory: $(pipeline_artefact_folder_extract)/$(pathToTerraformRootModule)
  env:
    # see https://learn.microsoft.com/en-us/azure/developer/go/azure-sdk-authorization#use-environment-based-authentication
    # for Azure authentication with Go
    ARM_SUBSCRIPTION_ID: $(TF_VAR_ARM_SUBSCRIPTION_ID)
    AZURE_CLIENT_ID: $(TF_VAR_ARM_CLIENT_ID)
    AZURE_TENANT_ID: $(TF_VAR_ARM_TENANT_ID)
    AZURE_CLIENT_SECRET: $(TF_VAR_ARM_CLIENT_SECRET) # set as pipeline secret
    resource_group_name: $(storageAccountResourceGroup)
    storage_account_name: $(storageAccount)
    container_name: $(stateBlobContainer)
    key: '$(MODULE)-$(TF_VAR_APPLICATION)-$(TF_VAR_ENVIRONMENT).tfstate'
    GO111MODULE: 'auto'
And in the main folder for my Terratest sub-folders, I have the run_terratests.ps1 script and the Terratests list file as below:
run_terratests.ps1
# this file is based on https://github.com/google/go-cloud/blob/master/internal/testing/runchecks.sh
#
# This script runs all go Terratest suites,
# compatibility checks, consistency checks, Wire, etc.
$moduleListFile = "./Terratests"

# regex filter: keep lines not beginning with #
$regexFilter = "^[^#]"

# read the module list file
[object] $arrayFromFile = Get-Content -Path $moduleListFile | Where-Object { $_ -match $regexFilter } | ConvertFrom-String -PropertyNames folder, totest

$result = 0 # no error by default

# remember the current folder
$main_path = Get-Location | select -ExpandProperty "Path"

# read the array and run the tests that are flagged to be tested
foreach ($line in $arrayFromFile) {
    # write-Host $line
    if ($line.totest -eq "yes") {
        $path = $line.folder
        Set-Location $main_path\$path
        $myPath = Get-Location
        # Write-Host $myPath
        # trigger terratest for this folder
        go test ./...
    }
    if ($false -eq $?)
    {
        $result = 1
    }
}

# back to school :D
Set-Location $main_path

if ($result -eq 1)
{
    Write-Error "Msbuild exit code indicates test failure."
    Write-Host "##vso[task.logissue type=error]Msbuild exit code indicates test failure."
    exit(1)
}
The code
if ($false -eq $?)
{
    $result = 1
}
is useful to make the pipeline fail on a test error without skipping the other tests.
Terratests
# this file lists all the modules to be tested in the "Tests_Unit_ConfigHelpers" repository.
# it is used by the "run_terratests.ps1" PowerShell script to trigger terratest for each test.
#
# Any line that doesn't begin with a '#' character and isn't empty is treated
# as a path relative to the top of the repository that has a module in it.
# The 'tobetested' field specifies whether this is a module that has to be tested.
#
# this file is based on https://github.com/google/go-cloud/blob/master/allmodules
# module-directory tobetested
azure_constants yes
configure_app_srv_etc yes
configure_frontdoor_etc yes
configure_hostnames yes
constants yes
FrontEnd_AppService_slots/_main yes
FrontEnd_AppService_slots/settings yes
merge_maps_of_strings yes
name yes
name_template yes
network/hostname_generator yes
network/hostnames_generator yes
replace_2vars_into_string_etc yes
replace_var_into_string_etc yes
sorting_map_with_an_other_map yes
And the change in each Terratest folder is that I add the go.mod and go.sum files:
$ go mod init mytest
go: creating new go.mod: module mytest
go: to add module requirements and sums:
go mod tidy
and
$ go mod tidy
# link each of the go modules needed for your terratest module
So, with that, the go test ./... from the PowerShell script will download the needed Go modules and run the tests for that particular folder.
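For illustration, after go mod init and go mod tidy, the go.mod in one of the test folders ends up looking roughly like this (a sketch; the exact require list, indirect dependencies and versions depend on what each test actually imports):
module mytest

go 1.17

require (
    github.com/gruntwork-io/terratest v0.40.6
    github.com/stretchr/testify v1.7.0
)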
Thanks for reading and vote if you think that can help :)

Failed to upload Azure Service Fabric package to Azure

I cannot upload an Azure Service Fabric (ASF) package to a remote Azure image store. Using the following script, the process fails at 53.8% with 0/6 replicated files (0/0% complete).
I also cannot publish to a remote cluster using Visual Studio, as I get the following error:
WARNING: Failed to contact Naming Service. Attempting to contact Failover
Are there any alternative ways of pushing up an application? The Azure Service Fabric on Azure experience just isn't working for me and I've wasted days on it.
Thanks in advance!
PowerShell deployment script below:
# Variables
$endpoint = 'XXX-asf.australiaeast.cloudapp.azure.com:19000'
$privateThumbprint = 'XXX'
$publicThumbprint = 'XX'
$projectPath = "C:\dev\asf-network-hops\AsfNetworkHops\AsfNetworkHops\AsfNetworkHops.sfproj"
$projectConfiguration = "Debug"
$packagePath = Resolve-Path "$projectPath\..\pkg\$projectConfiguration"
$applicationTypeName = "AsfNetworkHopsType"
$applicationName = "fabric:/AsfNetworkHops"
$imageStoreConnectionString = "fabric:ImageStore"
# Connect
# $conn = Connect-ServiceFabricCluster -ConnectionEndpoint $endpoint `
# -KeepAliveIntervalInSec 10 `
# -X509Credential -ServerCertThumbprint $privateThumbprint `
# -FindType FindByThumbprint -FindValue $publicThumbprint `
# -StoreLocation CurrentUser -StoreName My
$imageStoreConnectionString = ""
$conn = Connect-ServiceFabricCluster -ConnectionEndpoint XXX-asf.australiaeast.cloudapp.azure.com:19000 `
    -KeepAliveIntervalInSec 10 `
    -X509Credential -ServerCertThumbprint XXX `
    -FindType FindByThumbprint -FindValue XXX `
    -StoreLocation CurrentUser -StoreName My
if (-not $conn)
{
$conn
throw "Connection error!"
}
# Clean, Compile, & Package
dotnet clean $projectPath /p:Configuration=$projectConfiguration /nologo
dotnet msbuild $projectPath /t:Package /nologo /p:Platform="x64" /p:Configuration=$projectConfiguration /nologo /p:WarningLevel=1
Write-Host "Uploading package $packagePath"
Copy-ServiceFabricApplicationPackage $packagePath -ImageStoreConnectionString $imageStoreConnectionString -ApplicationPackagePathInImageStore $applicationTypeName -ShowProgress -ShowProgressIntervalMilliseconds 500 -TimeoutSec 1000000 -CompressPackage
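For context, a successful Copy-ServiceFabricApplicationPackage is normally followed by registering the type and creating the application; a minimal sketch using the variables above (the application type version is an assumption here and must match what the ApplicationManifest.xml declares):
# register the uploaded package with the cluster
Register-ServiceFabricApplicationType -ApplicationPathInImageStore $applicationTypeName

# create an application instance from the registered type
# ("1.0.0" is assumed; use the version from ApplicationManifest.xml)
New-ServiceFabricApplication -ApplicationName $applicationName `
    -ApplicationTypeName $applicationTypeName `
    -ApplicationTypeVersion "1.0.0"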

Zingchart PHP wrapper issue

I am attempting to set up Zingchart using the PHP wrapper module.
Server:
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
Running as a VM under VMWare
I am receiving an error with the example code when calling the ZC->connect method.
Thus:
<HTML>
<HEAD>
  <script src="zingchart.min.js"></script>
</HEAD>
<BODY>
  <h3>Simple Bar Chart (Database)</h3>
  <div id="myChart"></div>
  <?php
  include 'zc.php';
  use ZingChart\PHPWrapper\ZC;

  $servername = "localhost";
  $username   = "nasuser";
  $password   = "password";
  $db         = "nas";
  $myQuery    = "SELECT run_time, time_total from stats";

  // ################################ CHART 1 ################################
  // This chart will use data pulled from our database
  $zc = new ZC("myChart", "bar");
  $zc->connect($servername, "3306", $username, $password, $db);
  $data = $zc->query($myQuery, true);
  $zc->closeConnection();
  $zc->setSeriesData($data);
  $zc->setSeriesText($zc->getFieldNames());
  $zc->render();
  ?>
</BODY></HTML>
Error:
# php index.php
PHP Fatal error: Uncaught Error: Class 'ZingChart\PHPWrapper\mysqli' not found in /var/www/html/zc.php:69
Stack trace:
#0 /var/www/html/index.php(22): ZingChart\PHPWrapper\ZC->connect('localhost', '3306', 'nasuser', 'password', 'nas')
Line 69 is:
$this->mysqli = new mysqli($host, $username, $password, $dbName, $port);
Running:
# php -m | grep -i mysql
mysqli
mysqlnd
pdo_mysql
Seems to indicate that I have the relevant packages installed. Indeed, if I attempt a normal PHP connection it works:
<?php
$servername = "localhost";
$username = "nasuser";
$password = "password";
$dbname = "nas";

// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);
// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
$sql = "SELECT * from stats limit 10;";
$result = $conn->query($sql);
if ($result->num_rows > 0) {
    // output data of each row
    while ($row = $result->fetch_assoc()) {
        echo "id: " . $row["stat_id"] . "<br>";
    }
} else {
    echo "0 results";
}
$conn->close();
?>
Output:
# php db.php
id: 1<br>id: 2<br>id: 3<br>id: 4<br>id: 5<br>id: 6<br>id: 7<br>id: 8<br>id: 9<br>id: 10<br>
Any ideas?
Thanks.
I've encountered the same problem!
There is a namespace problem with the mysqli import.
I changed
$this->mysqli = new mysqli($host, $username, $password, $dbName, $port);
from ZC.php to
$this->mysqli = new \mysqli($host,$username,$password, $dbName, $port);
and it worked.
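For context, zc.php declares the ZingChart\PHPWrapper namespace (which is why the error names ZingChart\PHPWrapper\mysqli), so an unqualified new mysqli(...) is resolved inside that namespace. The leading backslash forces the global class; an equivalent fix, sketched below on the assumption that you are editing zc.php directly, is to import the global class at the top of the file:
<?php
namespace ZingChart\PHPWrapper;

use mysqli; // import the global mysqli class into this namespace

// ... later, inside ZC::connect(), the original line then resolves correctly:
$this->mysqli = new mysqli($host, $username, $password, $dbName, $port);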

Zip file is not extracted by crontab

I am using a Perl script to unzip a zip file from crontab. The script works properly if I execute it manually, but whenever I set it up in cron the script does not work any more. I have tested the cron setup: other script files work from cron, only this zip-extracting script does not.
The script is as follows:
#!/usr/bin/perl
use IO::Uncompress::Unzip qw(unzip $UnzipError);

$dir = '/root/perl';
open (han2, "ls -l $dir/*.zip |awk '{print \$9}'|");
@array1 = <han2>;
chomp(@array1);

for ($i=0;$i<=$#array1;$i++) {
    $zipfile = $array1[$i];
    $u = new IO::Uncompress::Unzip $zipfile
        or die "Cannot open $zipfile: $UnzipError";
    die "Zipfile has no members"
        if ! defined $u->getHeaderInfo;
    for ($status = 1; $status > 0; $status = $u->nextStream) {
        $name = $u->getHeaderInfo->{Name};
        warn "Processing member $name\n";
        if ($name =~ /\/$/) {
            mkdir $name;
        }
        else {
            unzip $zipfile => $name, Name => $name
                or die "unzip failed: $UnzipError\n";
        }
    }
}
Crontab setting:
34 14 * * * /root/perl/./unzip.pl > /dev/null 2>&1
Please help me get this task working from a cron job.
When cron executes your script, the current directory probably won't be /root/perl. Try chdir($dir) after you set $dir, or use full pathnames where required:
$u = new IO::Uncompress::Unzip "$dir/$zipfile"
or die "Cannot open $zipfile: $UnzipError";
mkdir "$dir/$name";
unzip "$dir/$zipfile" => "$dir/$name" ...
Changing to the correct directory is probably easier.
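A minimal sketch of the chdir approach applied to the top of the script (the rest of the loop stays unchanged):
#!/usr/bin/perl
use IO::Uncompress::Unzip qw(unzip $UnzipError);

# cron starts the script with an unpredictable working directory,
# so move to the folder holding the zip files before extracting
$dir = '/root/perl';
chdir($dir) or die "Cannot chdir to $dir: $!";

open (han2, "ls -l $dir/*.zip |awk '{print \$9}'|");
@array1 = <han2>;
chomp(@array1);
# ... rest of the original loop unchanged ...
Alternatively, the crontab entry itself can change directory first, e.g. 34 14 * * * cd /root/perl && ./unzip.pl > /dev/null 2>&1.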

using external redis server for testing tcl scripts

I am running Ubuntu 11.10.
I am trying to run the TCL test scripts against an external redis server, using the following:
sb#sb-laptop:~/Redis/redis$ tclsh tests/test_helper.tcl --host 192.168.1.130 --port 6379
I get the following error:
Testing unit/type/list
[exception]: Executing test client: couldn't open socket: connection refused.
couldn't open socket: connection refused
while executing
"socket $server $port"
(procedure "redis" line 2)
invoked from within
"redis $::host $::port"
(procedure "start_server" line 9)
invoked from within
"start_server {tags {"protocol"}} {
test "Handle an empty query" {
reconnect
r write "\r\n"
r flush
assert_equal "P..."
(file "tests/unit/protocol.tcl" line 1)
invoked from within
"source $path"
(procedure "execute_tests" line 4)
invoked from within
"execute_tests $data"
(procedure "test_client_main" line 9)
invoked from within
"test_client_main $::test_server_port "
The redis.conf is left at the default binding (the bind directive is commented out).
If this is possible, what am I doing wrong?
Additional Information:
Below is the tcl code that is responsible for starting the server
proc start_server {options {code undefined}} {
    # If we are running against an external server, we just push the
    # host/port pair in the stack the first time
    if {$::external} {
        if {[llength $::servers] == 0} {
            set srv {}
            dict set srv "host" $::host
            dict set srv "port" $::port
            set client [redis $::host $::port]
            dict set srv "client" $client
            $client select 9
            # append the server to the stack
            lappend ::servers $srv
        }
        uplevel 1 $code
        return
    }
    # setup defaults
    set baseconfig "default.conf"
    set overrides {}
    set tags {}
    # parse options
    foreach {option value} $options {
        switch $option {
            "config" {
                set baseconfig $value }
            "overrides" {
                set overrides $value }
            "tags" {
                set tags $value
                set ::tags [concat $::tags $value] }
            default {
                error "Unknown option $option" }
        }
    }
    set data [split [exec cat "tests/assets/$baseconfig"] "\n"]
    set config {}
    foreach line $data {
        if {[string length $line] > 0 && [string index $line 0] ne "#"} {
            set elements [split $line " "]
            set directive [lrange $elements 0 0]
            set arguments [lrange $elements 1 end]
            dict set config $directive $arguments
        }
    }
    # use a different directory every time a server is started
    dict set config dir [tmpdir server]
    # start every server on a different port
    set ::port [find_available_port [expr {$::port+1}]]
    dict set config port $::port
    # apply overrides from global space and arguments
    foreach {directive arguments} [concat $::global_overrides $overrides] {
        dict set config $directive $arguments
    }
    # write new configuration to temporary file
    set config_file [tmpfile redis.conf]
    set fp [open $config_file w+]
    foreach directive [dict keys $config] {
        puts -nonewline $fp "$directive "
        puts $fp [dict get $config $directive]
    }
    close $fp
    set stdout [format "%s/%s" [dict get $config "dir"] "stdout"]
    set stderr [format "%s/%s" [dict get $config "dir"] "stderr"]
    if {$::valgrind} {
        exec valgrind --suppressions=src/valgrind.sup src/redis-server $config_file > $stdout 2> $stderr &
    } else {
        exec src/redis-server $config_file > $stdout 2> $stderr &
    }
    # check that the server actually started
    # ugly but tries to be as fast as possible...
    set retrynum 100
    set serverisup 0
    if {$::verbose} {
        puts -nonewline "=== ($tags) Starting server ${::host}:${::port} "
    }
    after 10
    if {$code ne "undefined"} {
        while {[incr retrynum -1]} {
            catch {
                if {[ping_server $::host $::port]} {
                    set serverisup 1
                }
            }
            if {$serverisup} break
            after 50
        }
    } else {
        set serverisup 1
    }
    if {$::verbose} {
        puts ""
    }
    if {!$serverisup} {
        error_and_quit $config_file [exec cat $stderr]
    }
    # find out the pid
    while {![info exists pid]} {
        regexp {\[(\d+)\]} [exec cat $stdout] _ pid
        after 100
    }
    # setup properties to be able to initialize a client object
    set host $::host
    set port $::port
    if {[dict exists $config bind]} { set host [dict get $config bind] }
    if {[dict exists $config port]} { set port [dict get $config port] }
    # setup config dict
    dict set srv "config_file" $config_file
    dict set srv "config" $config
    dict set srv "pid" $pid
    dict set srv "host" $host
    dict set srv "port" $port
    dict set srv "stdout" $stdout
    dict set srv "stderr" $stderr
    # if a block of code is supplied, we wait for the server to become
    # available, create a client object and kill the server afterwards
    if {$code ne "undefined"} {
        set line [exec head -n1 $stdout]
        if {[string match {*already in use*} $line]} {
            error_and_quit $config_file $line
        }
        while 1 {
            # check that the server actually started and is ready for connections
            if {[exec cat $stdout | grep "ready to accept" | wc -l] > 0} {
                break
            }
            after 10
        }
        # append the server to the stack
        lappend ::servers $srv
        # connect client (after server dict is put on the stack)
        reconnect
        # execute provided block
        set num_tests $::num_tests
        if {[catch { uplevel 1 $code } error]} {
            set backtrace $::errorInfo
            # Kill the server without checking for leaks
            dict set srv "skipleaks" 1
            kill_server $srv
            # Print warnings from log
            puts [format "\nLogged warnings (pid %d):" [dict get $srv "pid"]]
            set warnings [warnings_from_file [dict get $srv "stdout"]]
            if {[string length $warnings] > 0} {
                puts "$warnings"
            } else {
                puts "(none)"
            }
            puts ""
            error $error $backtrace
        }
        # Don't do the leak check when no tests were run
        if {$num_tests == $::num_tests} {
            dict set srv "skipleaks" 1
        }
        # pop the server object
        set ::servers [lrange $::servers 0 end-1]
        set ::tags [lrange $::tags 0 end-[llength $tags]]
        kill_server $srv
    } else {
        set ::tags [lrange $::tags 0 end-[llength $tags]]
        set _ $srv
    }
}
Either there's nothing listening on host 192.168.1.130, port 6379 (well, at a guess) or your firewall configuration is blocking the connection. Impossible to say which, since all the code is really seeing is “the connection didn't work; something said ‘no’…”.
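A quick way to narrow it down (a sketch, assuming redis-cli is available on the machine running the tests) is to check whether anything answers on that host/port and whether the server is listening on a non-loopback interface:
# from the machine running the test suite
redis-cli -h 192.168.1.130 -p 6379 ping     # expect: PONG

# on the redis server, confirm it is listening on the external interface
netstat -ltnp | grep 6379

# in redis.conf, either leave the bind directive commented out (listen on all
# interfaces) or bind the external address explicitly, then restart redis-server:
# bind 192.168.1.130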
