We are looking to set up a build and deployment pipeline for SAP Hybris B2C. We are able to build and deploy, but we are not able to test the URL -> https://localhost:9002/yacceleratorstorefront .
Is there any other sample development code that we could use to test the deployment?
You could also use the site parameter approach:
http://localhost:9001/yacceleratorstorefront?site=apparel-uk&clear=true
http://localhost:9001/yacceleratorstorefront?site=apparel-de&clear=true
http://localhost:9001/yacceleratorstorefront?site=electronics&clear=true
http://localhost:9001/yacceleratorstorefront?site=powertools&clear=true
Execute the following command to install and initialize the B2C accelerator.
On Windows:
Your_hybris_installation_directory\installer\>install.bat -r b2c_acc_plus -A local_property:initialpassword.admin=nimda && install.bat -r b2c_acc_plus initialize -A local_property:initialpassword.admin=nimda
On Linux:
Your_hybris_installation_directory/installer]./install.sh -r b2c_acc_plus -A local_property:initialpassword.admin=nimda && ./install.sh -r b2c_acc_plus initialize -A local_property:initialpassword.admin=nimda
Make the following entry in your hosts file:
127.0.0.1 apparel-uk.local apparel-de.local electronics.local
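If your pipeline runs on Linux, you could append that entry from a script as well (a small sketch; it assumes the default /etc/hosts location and root privileges):

# Append the accelerator hostnames to the hosts file (run as root).
echo "127.0.0.1 apparel-uk.local apparel-de.local electronics.local" >> /etc/hosts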
Start the hybris server in either of the following two ways:
On Windows:
Your_hybris_installation_directory\installer\>install.bat -r b2c_acc_plus start
or
Your_hybris_installation_directory\hybris\bin\platform\>hybrisserver.bat
On Linux:
Your_hybris_installation_directory/installer]./install.sh -r b2c_acc_plus start
or
Your_hybris_installation_directory/hybris/bin/platform]./hybrisserver.sh
Once your hybris server is running, you can access any of the following URLs:
http://electronics.local:9001/yacceleratorstorefront/
http://apparel-uk.local:9001/yacceleratorstorefront/
http://apparel-de.local:9001/yacceleratorstorefront/
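For the pipeline itself, a minimal smoke test along these lines might replace the manual URL check; it assumes the storefront answers on plain HTTP port 9001 and that a healthy homepage returns HTTP 200 (adjust the URLs/port if you stay on https://localhost:9002):

# Sketch: fail the pipeline stage if any storefront homepage is not reachable.
for url in \
    http://electronics.local:9001/yacceleratorstorefront/ \
    http://apparel-uk.local:9001/yacceleratorstorefront/ \
    http://apparel-de.local:9001/yacceleratorstorefront/; do
    status=$(curl -s -o /dev/null -w "%{http_code}" "$url")
    echo "$url -> $status"
    [ "$status" = "200" ] || exit 1
done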
I hope you're doing well,
I'm trying to automate a Jenkins process using a bash script on Linux, in which I need to create a build, and then use that build to trigger another job via the "Build with parameters" option, selecting that specific build.
First I'm creating the build with a command similar to the following (it is working fine and creates the build successfully):
ssh -l MyUser -p JENK_PORT JENK_SERver build job-build -s -v
It creates build number 10; then I need to use this build to create another one for the job job-deploy, with something like:
ssh -l MyUser -p JENK_PORT JENK_SERver build job-deploy -p COPY_PROMOTION_LEVEL=1 -p BUILD_SELECTOR="\<SpecificBuildSelector plugin=\"copyartifact#1.37\"\> \<buildNumber\>10 \</buildNumber\>\</SpecificBuildSelector\>" -s -v
When I run it, I get this error:
ERROR: Too many arguments: plugin=copyartifact#1.37>
If I replace the space with / or add a backslash between SpecificBuildSelector and plugin=copyartifact#1.37, I get this error:
ERROR: Unexpected exception occurred while performing build command.
com.thoughtworks.xstream.io.StreamException: : only whitespace
content allowed before start tag and not \ (position: START_DOCUMENT
seen ... #1:1)
Do you know how I can do this, i.e. trigger the build for a specific upstream build from the command line by passing the build parameters with the -p option?
Thanks in advance.
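(One direction that might be worth trying, though it has not been verified against this Jenkins setup: since ssh joins its arguments with spaces and the remote side splits them again, the whole selector may need to reach the Jenkins CLI as a single -p value, for example by keeping it in one quoted shell variable.)

# Hypothetical sketch: build the selector in one variable and pass it as a single argument.
SELECTOR='<SpecificBuildSelector plugin="copyartifact#1.37"><buildNumber>10</buildNumber></SpecificBuildSelector>'
ssh -l MyUser -p JENK_PORT JENK_SERver build job-deploy \
    -p COPY_PROMOTION_LEVEL=1 \
    -p "BUILD_SELECTOR=$SELECTOR" \
    -s -v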
I use the mcr.microsoft.com/mssql/server:2017 docker container to run an MS SQL server. I tried to change the collation like this:
echo "SQL_Latin1_General_CP1_CI_AS" | /opt/mssql/bin/mssql-conf set-collation
Unfortunately I get this error:
No passwd entry for user 'mssql'
How is it possible to fix this error?
I created a new user with useradd mssql, but now I get this error if I run the command:
sqlservr: Unable to open /var/opt/mssql/.system/instance_id: File: pal.cpp:566 [Status: 0xC0000022 Access Denied errno = 0xD(13) Permission denied]
/opt/mssql/bin/sqlservr: PAL initialization failed. Error: 101
It looks like the latest mcr.microsoft.com/mssql/server image fixes this issue; if you need to stay on the old one, the following procedure fixes the user/permission issues:
cake@cake:~/20211012$ docker run --rm -it mcr.microsoft.com/mssql/server:2017-latest /bin/bash
SQL Server 2019 will run as non-root by default.
This container is running as user root.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
root@4fd0bdf1d21c:/# useradd mssql
root@4fd0bdf1d21c:/# mkdir -p /var/opt/mssql
root@4fd0bdf1d21c:/# chmod -R 777 /var/opt/mssql
root@4fd0bdf1d21c:/# echo "SQL_Latin1_General_CP1_CI_AS" | /opt/mssql/bin/mssql-conf set-collation
Enter the collation: Configuring SQL Server...
The SQL Server End-User License Agreement (EULA) must be accepted before SQL
Server can start. The license terms for this product can be downloaded from
http://go.microsoft.com/fwlink/?LinkId=746388.
You can accept the EULA by specifying the --accept-eula command line option,
setting the ACCEPT_EULA environment variable, or using the mssql-conf tool.
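Alternatively, if you can recreate the container, the SQL Server Linux images also let you set the collation at first start through environment variables (ACCEPT_EULA, SA_PASSWORD and MSSQL_COLLATION); whether the 2017 tag honors them exactly like the later releases is an assumption, so treat this as a sketch:

# Sketch: create a fresh container with the desired collation set at first start.
docker run -d --name mssql \
    -e 'ACCEPT_EULA=Y' \
    -e 'SA_PASSWORD=YourStrong!Passw0rd' \
    -e 'MSSQL_COLLATION=SQL_Latin1_General_CP1_CI_AS' \
    -p 1433:1433 \
    mcr.microsoft.com/mssql/server:2017-latest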
I run Appium Node.js tests on AWS Device Farm. I would like to get granular test results shown in Device Farm, but I always get one "Test Suite" result which includes all tests. So if one small test fails, the whole test suite fails.
I read in the Device Farm docs that in a standard environment more granular results are displayed, but I am not sure how to switch to or use the standard environment. I assume it has something to do with the YAML file, as the option to select between standard and custom environment is no longer available in the Device Farm UI.
This is my current YAML file:
version: 0.1

# Phases are collections of commands that get executed on Device Farm.
phases:
  # The install phase includes commands that install dependencies that your tests use.
  # Default dependencies for testing frameworks supported on Device Farm are already installed.
  install:
    commands:
      # By default, the Appium server version used is 1.7.2.
      # You can switch to an alternate supported version from 1.6.5, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.9.1 by using a command like "avm 1.7.1"
      # OR
      # To install a newer version of Appium use the following commands:
      - export APPIUM_VERSION=1.9.1
      - avm $APPIUM_VERSION
      - ln -s /usr/local/avm/versions/$APPIUM_VERSION/node_modules/.bin/appium /usr/local/avm/versions/$APPIUM_VERSION/node_modules/appium/bin/appium.js
      # By default the node version installed is 11.4.0.
      # You can switch to an alternate node version using the command below.
      # - nvm install 10.13.0
      # Unpackage and install the node modules that you uploaded in the test phase.
      - echo "Navigate to test package directory"
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      - npm install *.tgz

  # The pre-test phase includes commands that set up your test environment.
  pre_test:
    commands:
      # We recommend starting the appium server process in the background using the command below.
      # The Appium server log will go to the $DEVICEFARM_LOG_DIR directory.
      # The environment variables below will be auto-populated during run time.
      - echo "Start appium server"
      - >-
        appium --log-timestamp --device-name $DEVICEFARM_DEVICE_NAME
        --platform-name $DEVICEFARM_DEVICE_PLATFORM_NAME --app $DEVICEFARM_APP_PATH
        --automation-name UiAutomator2 --udid $DEVICEFARM_DEVICE_UDID
        --chromedriver-executable $DEVICEFARM_CHROMEDRIVER_EXECUTABLE >> $DEVICEFARM_LOG_DIR/appiumlog.txt 2>&1 &
      - >-
        start_appium_timeout=0;
        while [ true ];
        do
            if [ $start_appium_timeout -gt 60 ];
            then
                echo "appium server never started in 60 seconds. Exiting";
                exit 1;
            fi;
            grep -i "Appium REST http interface listener started on 0.0.0.0:4723" $DEVICEFARM_LOG_DIR/appiumlog.txt >> /dev/null 2>&1;
            if [ $? -eq 0 ];
            then
                echo "Appium REST http interface listener started on 0.0.0.0:4723";
                break;
            else
                echo "Waiting for appium server to start. Sleeping for 1 second";
                sleep 1;
                start_appium_timeout=$((start_appium_timeout+1));
            fi;
        done;

  # The test phase includes commands that run your test suite execution.
  test:
    commands:
      # Go into the root folder containing your source code and node_modules.
      - echo "Navigate to test source code"
      # Change the directory to the node_modules folder as it has your test code and the dependency node modules.
      - cd $DEVICEFARM_TEST_PACKAGE_PATH/node_modules/*
      - echo "Start Appium Node test"
      # Enter the command below to start the tests. The command should be similar to what you use to run the tests locally.
      # For example, assuming you run your tests locally with "node YOUR_TEST_FILENAME.js", enter the same command below:
      - npm run test:android

  # The post-test phase includes commands that are run after your tests are executed.
  post_test:
    commands:

# The artifacts phase lets you specify the locations where your test logs and device logs will be stored,
# and also lets you specify the location of test logs and artifacts you want Device Farm to collect.
# These logs and artifacts will be available through the ListArtifacts API in Device Farm.
artifacts:
  # By default, Device Farm will collect your artifacts from the following directories.
  - $DEVICEFARM_LOG_DIR
AWS Device Farm's standard mode is independent of YAML file. It's a setting that is configured in the "Configure" step when you schedule a run through the console or via CLI through the ScheduleRun API. Currently, AWS Device Farm does not support Appium Node in standard mode, which means that the granular reporting you are seeking is not available.
If you have further questions, you can head over to the AWS Device Farm Forums for additional assistance from their engineering team.
Andy
I'm using the Azure Batch Python API. When I create a new job, I see exit code 128 (image attached). How can I find out the reason for that?
I'm creating a new job using this code:
def wrap_commands_in_shell(commands):
    return "/bin/bash -c 'set -e; set -o pipefail; {}; wait'".format(';'.join(commands))

job_tasks = ['cd /mnt/batch/tasks/shared/ && git clone https://github.com/cryptobiu/OSPSI.git',
             'cd /mnt/batch/tasks/shared/OSPSI && git checkout cloud',
             'cd /mnt/batch/tasks/shared/OSPSI && cmake CMake',
             'cd /mnt/batch/tasks/shared/OSPSI && mkdir -p assets'
             ]

job_creation_information = batch.models.JobAddParameter(
    job_id,
    batch.models.PoolInformation(pool_id=pool_id),
    job_preparation_task=batch.models.JobPreparationTask(
        command_line=wrap_commands_in_shell(job_tasks),
        run_elevated=True,
        wait_for_success=True
    )
)
To diagnose, you can look at the stderr.txt and stdout.txt for the Job Preparation task that failed, either in the Azure portal, using Azure Batch Explorer, or from code via an SDK. Look at which node ran the job prep task, navigate to that node, and then to the job directory. Under the job directory you should see a jobpreparation directory; that directory will contain the stderr.txt and stdout.txt.
With regard to the exit code, there are a few potential problems that could cause this:
Did you install git, cmake and any other dependencies as part of a start task? (A sketch of such a start task follows this list.)
I get a 404 when I try to navigate to: https://github.com/cryptobiu/OSPSI. Does this repo exist? If it's a private repository, are you providing the correct credentials?
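For the first point, a start task that installs those build dependencies could look roughly like the sketch below (assuming an Ubuntu/Debian-based pool image; the package list is a guess at what the build needs, and the task has to run elevated so apt-get can install packages):

# Hypothetical start task command line for the pool.
/bin/bash -c 'apt-get update && apt-get install -y git cmake build-essential'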
A few notes about your job_tasks array:
You should not hardcode the paths /mnt/batch/tasks/shared. This path to the "shared" directory may not be the same between Linux distributions. You should use the environment variable $AZ_BATCH_NODE_SHARED_DIR instead. You can view a full list of Azure Batch pre-filled environment variables here.
You do not need to cd into the directory for each command; you only need to do it once. You can rewrite job_tasks as:
['cd $AZ_BATCH_NODE_SHARED_DIR',
'TODO: INSERT YOUR COMMANDS TO SETUP AUTH WITH GITHUB FOR PRIVATE REPO',
'git clone https://github.com/cryptobiu/OSPSI.git',
'cd OSPSI',
'cmake CMake',
'mkdir -p assets']
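For reference, with that list the wrap_commands_in_shell helper from the question would produce a single command line roughly like the following (the auth placeholder is omitted here):

/bin/bash -c 'set -e; set -o pipefail; cd $AZ_BATCH_NODE_SHARED_DIR;git clone https://github.com/cryptobiu/OSPSI.git;cd OSPSI;cmake CMake;mkdir -p assets; wait'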
I've installed an npm package / script in a jail on FreeNAS 9.10 (FreeBSD based).
It works perfectly if I run "npm start" in the directory where the scripts are installed.
However, I need this to auto-start when the jail starts, and I don't know how to do that. Do I need to create an rc script?
Basically, all I need to do is run "npm start" in the correct directory on startup. How do I do that?
thanks
Yes, you can place an rc script within the jail and enable it using the jail's /etc/rc.conf file.
But, for a quick and dirty solution, you could create a /etc/rc.local script (also within the jail's environment) and put your startup commands in there.
See the manual page here.
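For the rc.local route, a minimal sketch could look like this; the application directory and npm path are placeholders you would adapt to your jail:

#!/bin/sh
# Hypothetical /etc/rc.local inside the jail: change to the package directory
# and start the app in the background, logging its output.
cd /usr/local/my-npm-app && /usr/local/bin/npm start > /var/log/my-npm-app.log 2>&1 &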
I don't know about npm start, but for Node.js I made an rc script like this:
#!/bin/sh
# $FreeBSD: 340872 2014-01-24 00:14:07Z mat $
#
# PROVIDE: SERVICENAME
# REQUIRE: NETWORKING
# KEYWORD: shutdown
#
# Add the following line to /etc/rc.conf to enable SERVICENAME:
#
# SERVICENAME_enable="YES"
#
. /etc/rc.subr
name="SERVICENAME"
rcvar=SERVICENAME_enable
pidfile=${SERVICENAME_pidfile:-"/var/run/SERVICENAME.pid"}
command="/usr/sbin/daemon"
#command_args="-r -u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR" # cjayho: restart if crashed
command_args="-u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR"
load_rc_config $name
: ${SERVICENAME_enable:="NO"}
run_rc_command "$1"
Name this file something like SERVICENAME and put it in /usr/local/etc/rc.d.
To enable automatic startup, run this command as root:
sysrc SERVICENAME_enable="YES"
Do not forget to replace SERVICENAME, USERNAME and PROGDIR with your values, and add
process.chdir('/home/USERNAME/PROGDIR')
to your entry js file.
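Once the file is in place and enabled, you can also start the service immediately without rebooting the jail (assuming the script was saved as /usr/local/etc/rc.d/SERVICENAME and made executable):

service SERVICENAME start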