Installing reaction-platform on Windows, getting error `Parameter format not correct` - node.js

I'm trying to install reaction-platform on my Windows system. I have confirmed the dependencies mentioned, and everything seems fine.
I'm following the official documentation:
https://docs.reactioncommerce.com/docs/installation-reaction-platform
When I run the make command I get an error like:
28ec31dc0e9cba366bdbb724ce1f9733b327aae90801f371c234437748d7d688
abc5351b5c17853153771d8a9e868d4f443106bb908d23f6672531cd6105364c
dc796c6d02a715ccd9e1133157cb770e5399ed66184202186c1151d520e1d03b
Running pre-build hook script for reaction-hydra. reaction-hydra
post-project-start script invoked. FIND: Parameter format not correct
make: *** [prebuild-reaction-hydra] Error 2
Here is the makefile:
#gnu makefile
# This Makefile provides macro control of the Reaction Platform microservice
# ecosystem. It performs tasks like:
#
# * Verify dependencies are present
# * Clone git projects, checkout a particular reference
# * Preconfiguration and subproject bootstrapping
# * Launching subprojects
# * Teardown tasks with varying destructiveness
#
#
# Exit codes:
#
# All failures should exit with a detailed code that can be used for
# troubleshooting. The current exit codes are:
#
# 0: Success!
# 101: Github is not configured correctly.
# 102: Required dependency is not installed.
#
###############################################################################
### Configuration
### Load configuration from external files. Configuration variables defined in
### later files have precedent and will overwrite those defined in previous
### files. The -include directive ensures that no error is thrown if a file is
### not found, which is the case if config.local.mk does not exist.
###############################################################################
-include config.mk config.local.mk
SUBPROJECTS=$(foreach rr,$(SUBPROJECT_REPOS),$(shell echo $(rr) | cut -d , -f 2))
###############################################################################
### Tasks
###############################################################################
all: init
###############################################################################
### Init-Project
### Initializes a project. Does not do common tasks shared between projects.
###############################################################################
define init-template
init-$(1): $(1) network-create prebuild-$(1) build-$(1) post-build-$(1) start-$(1) post-project-start-$(1)
endef
$(foreach p,$(SUBPROJECTS),$(eval $(call init-template,$(p))))
###############################################################################
### Init Project with System
### Init project and run the post-system hook script.
### Assumes dependencies are already started.
###############################################################################
define init-with-system-template
init-with-system-$(1): init-$(1) post-system-start-$(1)
endef
$(foreach p,$(SUBPROJECTS),$(eval $(call init-with-system-template,$(p))))
.PHONY: init
init: $(foreach p,$(SUBPROJECTS),init-$(p)) post-system-start
###############################################################################
### Targets to verify Github is configured correctly.
###############################################################################
github-configured: dependencies
@(ssh -T git@github.com 2>&1 \
| grep "successfully authenticated" >/dev/null \
&& echo "Github login verified.") \
|| (echo "You need to configure an ssh key with access to github" \
&& echo "See https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/ for instructions" \
&& exit 101)
###############################################################################
### Verify prerequisite software is installed.
###############################################################################
is-not-installed=! (command -v $(1) >/dev/null)
define dependency-template
dependency-$(1):
@if ( $(call is-not-installed,$(1)) ); \
then \
echo "Dependency" $(1) " not found in path." \
&& exit 102; \
else \
echo "Dependency" $(1) "found."; \
fi;
endef
$(foreach pkg,$(REQUIRED_SOFTWARE),$(eval $(call dependency-template,$(pkg))))
.PHONY: dependencies
dependencies: $(foreach pkg,$(REQUIRED_SOFTWARE),dependency-$(pkg))
###############################################################################
### Create Docker Networks
### Create all networks defined in the DOCKER_NETWORKS variable.
### Networks provide a way to loosely couple the projects and allow them to
### communicate with each other. We'll use dependencies on external networks
### rather than dependencies on other projects. Networks are lightweight and
### easy to create.
###############################################################################
define network-create-template
network-create-$(1):
@docker network create "$(1)" || true
endef
$(foreach p,$(DOCKER_NETWORKS),$(eval $(call network-create-template,$(p))))
.PHONY: network-create
network-create: $(foreach p,$(DOCKER_NETWORKS),network-create-$(p))
###############################################################################
### Remove Docker Networks
### Remove all networks defined in the DOCKER_NETWORKS variable.
###############################################################################
define network-remove-template
network-remove-$(1):
@docker network rm "$(1)" || true
endef
$(foreach p,$(DOCKER_NETWORKS),$(eval $(call network-remove-template,$(p))))
.PHONY: network-remove
network-remove: $(foreach p,$(DOCKER_NETWORKS),network-remove-$(p))
###############################################################################
### Git cloning
###############################################################################
define git-clone-template
$(2):
if [ ! -d "$(2)" ] ; then \
git clone "$(1)" "$(2)"; \
cd $(2) && git checkout "$(3)"; \
fi
endef
$(foreach rr,$(SUBPROJECT_REPOS),$(eval $(call git-clone-template,$(shell echo $(rr) | cut -d , -f 1),$(shell echo $(rr) | cut -d , -f 2),$(shell echo $(rr) | cut -d , -f 3))))
.PHONY: clone
clone: github-configured $(foreach p,$(SUBPROJECTS),$(p))
###############################################################################
### Git checkout
### Checkout the branch configured in the platform settings.
### Does not gracefully deal with conflicts or other problems.
###############################################################################
define git-checkout-template
checkout-$(2): $(2)
cd $(2) && git checkout "$(3)"
endef
$(foreach rr,$(SUBPROJECT_REPOS),$(eval $(call git-checkout-template,$(shell echo $(rr) | cut -d , -f 1),$(shell echo $(rr) | cut -d , -f 2),$(shell echo $(rr) | cut -d , -f 3))))
.PHONY: checkout
checkout: clone $(foreach p,$(SUBPROJECTS),checkout-$(p))
###############################################################################
### Pre Build Hook
### Invokes the pre-build hook in the child project directory if it exists.
### Invoked before the Docker Compose build.
###############################################################################
define prebuild-template
prebuild-$(1): $(1)
#if [ -e "$(1)/$(HOOK_DIR)/pre-build" ]; then \
echo "Running pre-build hook script for $(1)." \
&& "$(1)/$(HOOK_DIR)/pre-build"; \
else \
echo "No pre-build hook script for $(1). Skipping."; \
fi;
endef
$(foreach p,$(SUBPROJECTS),$(eval $(call prebuild-template,$(p))))
.PHONY: prebuild
prebuild: $(foreach p,$(SUBPROJECTS),prebuild-$(p))
###############################################################################
### Docker Build
### Performs `docker-compose build --no-cache --pull`
### This is a very conservative build strategy to avoid cache related build
### issues.
###############################################################################
define build-template
build-$(1): prebuild-$(1)
@cd $(1) \
&& docker-compose build --no-cache --pull
endef
$(foreach p,$(SUBPROJECTS),$(eval $(call build-template,$(p))))
.PHONY: build
build: $(foreach p,$(SUBPROJECTS),build-$(p))
###############################################################################
### Post Build Hook
### Invokes the post-build hook in the child project if existent.
### Invoke after all services in a project have been built.
###############################################################################
define post-build-template
post-build-$(1): build-$(1)
#if [ -e "$(1)/$(HOOK_DIR)/post-build" ]; then \
echo "Running post-build hook script for $(1)." \
&& "$(1)/$(HOOK_DIR)/post-build"; \
else \
echo "No post-build hook script for $(1). Skipping."; \
fi;
endef
$(foreach p,$(SUBPROJECTS),$(eval $(call post-build-template,$(p))))
.PHONY: post-build
post-build: $(foreach p,$(SUBPROJECTS),post-build-$(p))
###############################################################################
### Start
### Starts services with `docker-compose up -d`
###############################################################################
define start-template
start-$(1):
@cd $(1) \
&& docker-compose up -d
endef
$(foreach p,$(SUBPROJECTS),$(eval $(call start-template,$(p))))
.PHONY: start
start: $(foreach p,$(SUBPROJECTS),start-$(p))
###############################################################################
### Post Project Start Hook
### Invokes the post-project-start hook in the child project if existent.
### Invoked after all services in a project have been started.
###############################################################################
define post-project-start-template
post-project-start-$(1):
#if [ -e "$(1)/$(HOOK_DIR)/post-project-start" ]; then \
echo "Running post-project-start hook script for $(1)." \
&& "$(1)/$(HOOK_DIR)/post-project-start"; \
else \
echo "No post-project-start hook script for $(1). Skipping."; \
fi;
endef
$(foreach p,$(SUBPROJECTS),$(eval $(call post-project-start-template,$(p))))
###############################################################################
### Post System Start Hook
### Invokes the post-system-start hook in the child projects if existent.
### Invoked after all services in the system have been started.
###
### Note: The final echo is required otherwise output of post-system-hook is
### not output.
###############################################################################
define post-system-start-template
post-system-start-$(1):
#if [ -e "$(1)/$(HOOK_DIR)/post-system-start" ]; then \
echo "Running post-system-start hook script for $(1)." \
&& "$(1)/$(HOOK_DIR)/post-system-start" \
&& echo ""; \
else \
echo "No post-system-start hook script for $(1). Skipping."; \
fi;
endef
$(foreach p,$(SUBPROJECTS),$(eval $(call post-system-start-template,$(p))))
.PHONY: post-system-start
post-system-start: $(foreach p,$(SUBPROJECTS),post-system-start-$(p))
###############################################################################
### Stop
### Stops services with `docker-compose stop`
###############################################################################
define stop-template
stop-$(1):
@cd $(1) \
&& docker-compose stop
endef
$(foreach p,$(SUBPROJECTS),$(eval $(call stop-template,$(p))))
.PHONY: stop
stop: $(foreach p,$(SUBPROJECTS),stop-$(p))
###############################################################################
### rm
### Remove containers with `docker-compose rm`
### Does not remove volumes.
###############################################################################
define rm-template
rm-$(1):
@cd $(1) \
&& docker-compose rm --stop --force
endef
$(foreach p,$(SUBPROJECTS),$(eval $(call rm-template,$(p))))
.PHONY: rm
rm: $(foreach p,$(SUBPROJECTS),rm-$(p))
###############################################################################
### Clean
### Clean services with `docker-compose rm`
### Removes all containers, volumes and local networks.
###############################################################################
define clean-template
clean-$(1):
@cd $(1) \
&& docker-compose down -v --rmi local --remove-orphans
endef
$(foreach p,$(SUBPROJECTS),$(eval $(call clean-template,$(p))))
.PHONY: clean
clean: $(foreach p,$(SUBPROJECTS),clean-$(p)) network-remove
###############################################################################
### Destroy
### Deletes project directories after removing running containers.
### WARNING: This is extremely destructive. It will remove local project
### directories. Any work that is not pushed to a remote git
### repository will be lost!
###
###############################################################################
define destroy-template
destroy-$(1): clean
@rm -Rf $(1)
endef
$(foreach p,$(SUBPROJECTS),$(eval $(call destroy-template,$(p))))
.PHONY: destroy
destroy: network-remove $(foreach p,$(SUBPROJECTS),destroy-$(p))
###############################################################################
### Dynamically list all targets.
### See: https://stackoverflow.com/a/26339924
###############################################################################
.PHONY: list
list:
@$(MAKE) -pRrq -f $(MAKEFILE_LIST) : 2>/dev/null | awk -v RS= -F: '/^# File/,/^# Finished Make data base/ {if ($$1 !~ "^[#.]") {print $$1}}' | sort | egrep -v -e '^[^[:alnum:]]' -e '^$@$$' | xargs -n 1
Any help would be appreciated.

Run the makefile in a Git Bash terminal or in WSL2. The problem is that the commands inside the makefile are Unix shell commands, which are not interpreted correctly outside such an environment.
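If you stay on Git Bash, one quick thing to check (an assumption on my part, based on the message itself: "FIND: Parameter format not correct" is the wording used by Windows' own FIND.EXE, not by GNU find) is which find the hook scripts actually resolve:
# Inside Git Bash: list every find on PATH, in resolution order.
type -a find
# If C:\Windows\System32\find.exe comes first, put the GNU tools first
# for this shell and retry the build.
export PATH="/usr/bin:$PATH"
make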

I have found an alternate solution to install/run reaction-commerce on the Windows platform, using npm and reaction-cli:
https://docs.reactioncommerce.com/docs/next/installation-windows
Still, I'm looking for a solution/fix with Docker.

Related

The order of execution in Makefile - "rm -rf" does not finish on time?

Makefile:
# Defines
BUILD_PATH ?= out
REPO_NAME ?= my_work_dir
REPO_URL ?= git@github.com:your_org/your_repo
REPO_BRANCH := main
### Functions
define clone_repository
echo Cloning repository...
git -C $(1) init --quiet
git -C $(1) remote add origin $(2)
git -C $(1) fetch origin --progress --quiet --depth 1 $(3)
git -C $(1) reset --quiet --hard FETCH_HEAD
endef
define get_sha1
$(2):=$(shell git -C $(1) rev-parse HEAD)
endef
### Targets
do_the_work:
# Prepare
rm -rf $(BUILD_PATH)/$(REPO_NAME)
mkdir -p $(BUILD_PATH)/$(REPO_NAME)
# Clone
$(call clone_repository,$(BUILD_PATH)/$(REPO_NAME),$(REPO_URL),$(REPO_BRANCH))
# SHA1
$(eval $(call get_sha1,$(BUILD_PATH)/$(REPO_NAME),REPO_BRANCH_SHA1))
# Do...
# do_something --sha1 REPO_BRANCH_SHA1
What I tried to do in the do_the_work is:
Step 1: create an empty dir
Step 2: clone repository
Step 3: get sha1 from repository
Step 4: do something with sha1 info
However, when I execute do_the_work, I get an error (please note that $(BUILD_PATH)/$(REPO_NAME) evaluates to out/my_work_dir):
fatal: cannot change to 'out/my_work_dir': No such file or directory
rm -rf out/my_work_dir
mkdir -p out/my_work_dir
echo Cloning repository...
.
.
.
etc
But when I run the same command again, it executes OK! Is this because out/my_work_dir is already in place? It also works if the dir is completely empty.
It seems like the Step 3 $(eval $(call get_sha1...)) is executed before the Step 1 mkdir finishes? How do I fix this?
To me it seems that rm -rf gets prolonged if the dir does not exist at all.
The error is in the $(shell ...) invocation in define get_sha1: it runs when make expands the recipe, which happens before any of the recipe's commands are executed, not when that line of the recipe is reached.
This seems like an overcomplication anyway; I would simply get rid of the define and replace the recipe with one where the result is stored in a shell variable.
do_the_work:
...
# SHA1
sha1=$$(git -C $(BUILD_PATH)/$(REPO_NAME) rev-parse HEAD); \
# Do...
# do_something --sha1 "$$sha1"

Make install succeeds in terminal command, but failed in shell script

I wrote a shell script to automatically install Python on my Linux machine, like this:
[ ! -d /usr/local/src/python ] && mkdir /usr/local/src/python
[ ! -d /usr/local/python ] && mkdir /usr/local/python
tar -xvpf ~/Python-2.7.3.tgz -C /usr/local/src/
cd /usr/local/src/python/Python-2.7.3 && ./configure --prefix=/usr/local/python && make && make install
But it failed with errors like this:
make: *** no rule to make target 'install '. stop.
I find that make succeeds while make install fails; does anybody know how to fix it? Thanks!
Below is the result of
cat Makefile | grep -B5 -A15 install
# time you run the configure script. Ideally, you can do:
#
# ./configure
# make
# make test
# make install
#
# If you have a previous version of Python installed that you don't
# want to overwrite, you can use "make altinstall" instead of "make
# install". Refer to the "Installing" section in the README file for
# additional details.
#
# See also the section "Build instructions" in the README file.
# === Variables set by makesetup ===
MODOBJS= Modules/threadmodule.o Modules/signalmodule.o Modules/posixmodule.o Modules/errnomodule.o Modules/pwdmodule.o Modules/_sre.o Modules/_codecsmodule.o Modules/_weakref.o Modules/zipimport.o Modules/symtablemodule.o Modules/xxsubtype.o
MODLIBS= $(LOCALMODLIBS) $(BASEMODLIBS)
# === Variables set by configure
VERSION= 2.7
srcdir= .
CC= gcc -pthread
--
SHELL= /bin/sh
# Use this to make a link between python$(VERSION) and python in $(BINDIR)
LN= ln
# Portable install script (configure doesn't always guess right)
INSTALL= /usr/bin/install -c
INSTALL_PROGRAM=${INSTALL}
INSTALL_SCRIPT= ${INSTALL}
INSTALL_DATA= ${INSTALL} -m 644
# Shared libraries must be installed with executable mode on some systems;
# rather than figuring out exactly which, we always give them executable mode.
# Also, making them read-only seems to be a good idea...
INSTALL_SHARED= ${INSTALL} -m 555
MAKESETUP= $(srcdir)/Modules/makesetup
# Compiler options
OPT= -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes
BASECFLAGS= -fno-strict-aliasing
CFLAGS= $(BASECFLAGS) -g -O2 $(OPT) $(EXTRA_CFLAGS)
# Both CPPFLAGS and LDFLAGS need to contain the shell's value for setup.py to
# be able to build extension modules using the directories specified in the
# environment variables
CPPFLAGS= -I. -IInclude -I$(srcdir)/Include
LDFLAGS=
--
#export MACOSX_DEPLOYMENT_TARGET
# Options to enable prebinding (for fast startup prior to Mac OS X 10.3)
OTHER_LIBTOOL_OPT=
# Environment to run shared python without installed libraries
RUNSHARED=
# Modes for directories, executables and data files created by the
# install process. Default to user-only-writable for all file types.
DIRMODE= 755
EXEMODE= 755
FILEMODE= 644
# configure script arguments
CONFIG_ARGS= '--prefix=/usr/local/pythonbvs'
# Subdirectories with code
SRCDIRS= Parser Grammar Objects Python Modules Mac
# Other subdirectories
SUBDIRSTOO= Include Lib Misc Demo
# Files and directories to be distributed
--
else \
$(BLDSHARED) -o $@ $(LIBRARY_OBJS) $(MODLIBS) $(SHLIBS) $(LIBC) $(LIBM) $(LDLAST); \
fi
libpython$(VERSION).dylib: $(LIBRARY_OBJS)
$(CC) -dynamiclib -Wl,-single_module $(LDFLAGS) -undefined dynamic_lookup -Wl,-install_name,$(prefix)/lib/libpython$(VERSION).dylib -Wl,-compatibility_version,$(VERSION) -Wl,-current_version,$(VERSION) -o $@ $(LIBRARY_OBJS) $(SHLIBS) $(LIBC) $(LIBM) $(LDLAST); \
libpython$(VERSION).sl: $(LIBRARY_OBJS)
$(LDSHARED) -o $@ $(LIBRARY_OBJS) $(MODLIBS) $(SHLIBS) $(LIBC) $(LIBM) $(LDLAST)
# Copy up the gdb python hooks into a position where they can be automatically
# loaded by gdb during Lib/test/test_gdb.py
#
# Distributors are likely to want to install this somewhere else e.g. relative
# to the stripped DWARF data for the shared library.
gdbhooks: $(BUILDPYTHON)-gdb.py
SRC_GDB_HOOKS=$(srcdir)/Tools/gdb/libpython.py
$(BUILDPYTHON)-gdb.py: $(SRC_GDB_HOOKS)
$(INSTALL_DATA) $(SRC_GDB_HOOKS) $(BUILDPYTHON)-gdb.py
# This rule is here for OPENSTEP/Rhapsody/MacOSX. It builds a temporary
# minimal framework (not including the Lib directory and such) in the current
# directory.
RESSRCDIR=Mac/Resources/framework
$(PYTHONFRAMEWORKDIR)/Versions/$(VERSION)/$(PYTHONFRAMEWORK): \
$(LIBRARY) \
$(RESSRCDIR)/Info.plist
$(INSTALL) -d -m $(DIRMODE) $(PYTHONFRAMEWORKDIR)/Versions/$(VERSION)
$(CC) -o $(LDLIBRARY) $(LDFLAGS) -dynamiclib \
-all_load $(LIBRARY) -Wl,-single_module \
-install_name $(DESTDIR)$(PYTHONFRAMEWORKINSTALLDIR)/Versions/$(VERSION)/$(PYTHONFRAMEWORK) \
-compatibility_version $(VERSION) \
-current_version $(VERSION);
$(INSTALL) -d -m $(DIRMODE) \
$(PYTHONFRAMEWORKDIR)/Versions/$(VERSION)/Resources/English.lproj
$(INSTALL_DATA) $(RESSRCDIR)/Info.plist \
$(PYTHONFRAMEWORKDIR)/Versions/$(VERSION)/Resources/Info.plist
$(LN) -fsn $(VERSION) $(PYTHONFRAMEWORKDIR)/Versions/Current
$(LN) -fsn Versions/Current/$(PYTHONFRAMEWORK) $(PYTHONFRAMEWORKDIR)/$(PYTHONFRAMEWORK)
$(LN) -fsn Versions/Current/Headers $(PYTHONFRAMEWORKDIR)/Headers
$(LN) -fsn Versions/Current/Resources $(PYTHONFRAMEWORKDIR)/Resources
# This rule builds the Cygwin Python DLL and import library if configured
# for a shared core library; otherwise, this rule is a noop.
$(DLLLIBRARY) libpython$(VERSION).dll.a: $(LIBRARY_OBJS)
if test -n "$(DLLLIBRARY)"; then \
--
-rm -f $(srcdir)/Lib/test/*.py[co]
-$(TESTPYTHON) $(TESTPROG) $(MEMTESTOPTS)
$(TESTPYTHON) $(TESTPROG) $(MEMTESTOPTS)
# Install everything
install: altinstall bininstall maninstall
# Install almost everything without disturbing previous versions
altinstall: altbininstall libinstall inclinstall libainstall \
sharedinstall oldsharedinstall
# Install shared libraries enabled by Setup
DESTDIRS= $(exec_prefix) $(LIBDIR) $(BINLIBDEST) $(DESTSHARED)
oldsharedinstall: $(DESTSHARED) $(SHAREDMODS)
@for i in X $(SHAREDMODS); do \
if test $$i != X; then \
echo $(INSTALL_SHARED) $$i $(DESTSHARED)/`basename $$i`; \
$(INSTALL_SHARED) $$i $(DESTDIR)$(DESTSHARED)/`basename $$i`; \
fi; \
done
$(DESTSHARED):
@for i in $(DESTDIRS); \
do \
if test ! -d $(DESTDIR)$$i; then \
echo "Creating directory $$i"; \
$(INSTALL) -d -m $(DIRMODE) $(DESTDIR)$$i; \
else true; \
fi; \
done
# Install the interpreter by creating a symlink chain:
# $(PYTHON) -> python2 -> python$(VERSION))
# Also create equivalent chains for other installed files
bininstall: altbininstall
-if test -f $(DESTDIR)$(BINDIR)/$(PYTHON) -o -h $(DESTDIR)$(BINDIR)/$(PYTHON); \
then rm -f $(DESTDIR)$(BINDIR)/$(PYTHON); \
else true; \
fi
(cd $(DESTDIR)$(BINDIR); $(LN) -s python2$(EXE) $(PYTHON))
-rm -f $(DESTDIR)$(BINDIR)/python2$(EXE)
(cd $(DESTDIR)$(BINDIR); $(LN) -s python$(VERSION)$(EXE) python2$(EXE))
-rm -f $(DESTDIR)$(BINDIR)/python2-config
(cd $(DESTDIR)$(BINDIR); $(LN) -s python$(VERSION)-config python2-config)
-rm -f $(DESTDIR)$(BINDIR)/python-config
(cd $(DESTDIR)$(BINDIR); $(LN) -s python2-config python-config)
-test -d $(DESTDIR)$(LIBPC) || $(INSTALL) -d -m $(DIRMODE) $(DESTDIR)$(LIBPC)
-rm -f $(DESTDIR)$(LIBPC)/python2.pc
(cd $(DESTDIR)$(LIBPC); $(LN) -s python-$(VERSION).pc python2.pc)
-rm -f $(DESTDIR)$(LIBPC)/python.pc
(cd $(DESTDIR)$(LIBPC); $(LN) -s python2.pc python.pc)
# Install the interpreter with $(VERSION) affixed
# This goes into $(exec_prefix)
altbininstall: $(BUILDPYTHON)
@for i in $(BINDIR) $(LIBDIR); \
do \
if test ! -d $(DESTDIR)$$i; then \
echo "Creating directory $$i"; \
$(INSTALL) -d -m $(DIRMODE) $(DESTDIR)$$i; \
else true; \
fi; \
done
$(INSTALL_PROGRAM) $(BUILDPYTHON) $(DESTDIR)$(BINDIR)/python$(VERSION)$(EXE)
if test -f $(LDLIBRARY); then \
if test -n "$(DLLLIBRARY)" ; then \
$(INSTALL_SHARED) $(DLLLIBRARY) $(DESTDIR)$(BINDIR); \
else \
$(INSTALL_SHARED) $(LDLIBRARY) $(DESTDIR)$(LIBDIR)/$(INSTSONAME); \
if test $(LDLIBRARY) != $(INSTSONAME); then \
--
fi; \
else true; \
fi
# Install the manual page
maninstall:
@for i in $(MANDIR) $(MANDIR)/man1; \
do \
if test ! -d $(DESTDIR)$$i; then \
echo "Creating directory $$i"; \
$(INSTALL) -d -m $(DIRMODE) $(DESTDIR)$$i; \
else true; \
fi; \
done
$(INSTALL_DATA) $(srcdir)/Misc/python.man \
$(DESTDIR)$(MANDIR)/man1/python$(VERSION).1
# Install the library
PLATDIR= plat-$(MACHDEP)
EXTRAPLATDIR=
EXTRAMACHDEPPATH=
--
distutils distutils/command distutils/tests $(XMLLIBSUBDIRS) \
multiprocessing multiprocessing/dummy \
unittest unittest/test \
lib-old \
curses pydoc_data $(MACHDEPS)
libinstall: build_all $(srcdir)/Lib/$(PLATDIR) $(srcdir)/Modules/xxmodule.c
@for i in $(SCRIPTDIR) $(LIBDEST); \
do \
if test ! -d $(DESTDIR)$$i; then \
echo "Creating directory $$i"; \
$(INSTALL) -d -m $(DIRMODE) $(DESTDIR)$$i; \
else true; \
fi; \
done
@for d in $(LIBSUBDIRS); \
do \
a=$(srcdir)/Lib/$$d; \
if test ! -d $$a; then continue; else true; fi; \
b=$(LIBDEST)/$$d; \
if test ! -d $(DESTDIR)$$b; then \
echo "Creating directory $$b"; \
--
# is not available in configure
sed -e "s,#EXENAME#,$(BINDIR)/python$(VERSION)$(EXE)," < $(srcdir)/Misc/python-config.in >python-config
# Install the include files
INCLDIRSTOMAKE=$(INCLUDEDIR) $(CONFINCLUDEDIR) $(INCLUDEPY) $(CONFINCLUDEPY)
inclinstall:
@for i in $(INCLDIRSTOMAKE); \
do \
if test ! -d $(DESTDIR)$$i; then \
echo "Creating directory $$i"; \
$(INSTALL) -d -m $(DIRMODE) $(DESTDIR)$$i; \
else true; \
fi; \
done
@for i in $(srcdir)/Include/*.h; \
do \
echo $(INSTALL_DATA) $$i $(INCLUDEPY); \
$(INSTALL_DATA) $$i $(DESTDIR)$(INCLUDEPY); \
done
$(INSTALL_DATA) pyconfig.h $(DESTDIR)$(CONFINCLUDEPY)/pyconfig.h
--
LIBPL= $(LIBP)/config
# pkgconfig directory
LIBPC= $(LIBDIR)/pkgconfig
libainstall: all python-config
@for i in $(LIBDIR) $(LIBP) $(LIBPL) $(LIBPC); \
do \
if test ! -d $(DESTDIR)$$i; then \
echo "Creating directory $$i"; \
$(INSTALL) -d -m $(DIRMODE) $(DESTDIR)$$i; \
else true; \
fi; \
done
@if test -d $(LIBRARY); then :; else \
if test "$(PYTHONFRAMEWORKDIR)" = no-framework; then \
if test "$(SO)" = .dll; then \
$(INSTALL_DATA) $(LDLIBRARY) $(DESTDIR)$(LIBPL) ; \
else \
$(INSTALL_DATA) $(LIBRARY) $(DESTDIR)$(LIBPL)/$(LIBRARY) ; \
$(RANLIB) $(DESTDIR)$(LIBPL)/$(LIBRARY) ; \
fi; \
else \
echo Skip install of $(LIBRARY) - use make frameworkinstall; \
fi; \
fi
$(INSTALL_DATA) Modules/config.c $(DESTDIR)$(LIBPL)/config.c
$(INSTALL_DATA) Modules/python.o $(DESTDIR)$(LIBPL)/python.o
$(INSTALL_DATA) $(srcdir)/Modules/config.c.in $(DESTDIR)$(LIBPL)/config.c.in
$(INSTALL_DATA) Makefile $(DESTDIR)$(LIBPL)/Makefile
$(INSTALL_DATA) Modules/Setup $(DESTDIR)$(LIBPL)/Setup
$(INSTALL_DATA) Modules/Setup.local $(DESTDIR)$(LIBPL)/Setup.local
$(INSTALL_DATA) Modules/Setup.config $(DESTDIR)$(LIBPL)/Setup.config
$(INSTALL_DATA) Misc/python.pc $(DESTDIR)$(LIBPC)/python-$(VERSION).pc
$(INSTALL_SCRIPT) $(srcdir)/Modules/makesetup $(DESTDIR)$(LIBPL)/makesetup
$(INSTALL_SCRIPT) $(srcdir)/install-sh $(DESTDIR)$(LIBPL)/install-sh
$(INSTALL_SCRIPT) python-config $(DESTDIR)$(BINDIR)/python$(VERSION)-config
rm python-config
@if [ -s Modules/python.exp -a \
"`echo $(MACHDEP) | sed 's/^\(...\).*/\1/'`" = "aix" ]; then \
echo; echo "Installing support files for building shared extension modules on AIX:"; \
$(INSTALL_DATA) Modules/python.exp \
$(DESTDIR)$(LIBPL)/python.exp; \
echo; echo "$(LIBPL)/python.exp"; \
$(INSTALL_SCRIPT) $(srcdir)/Modules/makexp_aix \
$(DESTDIR)$(LIBPL)/makexp_aix; \
echo "$(LIBPL)/makexp_aix"; \
$(INSTALL_SCRIPT) $(srcdir)/Modules/ld_so_aix \
$(DESTDIR)$(LIBPL)/ld_so_aix; \
echo "$(LIBPL)/ld_so_aix"; \
echo; echo "See Misc/AIX-NOTES for details."; \
--
;; \
esac
# Install the dynamically loadable modules
# This goes into $(exec_prefix)
sharedinstall: sharedmods
$(RUNSHARED) ./$(BUILDPYTHON) -E $(srcdir)/setup.py install \
--prefix=$(prefix) \
--install-scripts=$(BINDIR) \
--install-platlib=$(DESTSHARED) \
--root=$(DESTDIR)/
# Here are a couple of targets for MacOSX again, to install a full
# framework-based Python. frameworkinstall installs everything, the
# subtargets install specific parts. Much of the actual work is offloaded to
# the Makefile in Mac
#
#
# This target is here for backward compatiblity, previous versions of Python
# hadn't integrated framework installation in the normal install process.
frameworkinstall: install
# On install, we re-make the framework
# structure in the install location, /Library/Frameworks/ or the argument to
# --enable-framework. If --enable-framework has been specified then we have
# automatically set prefix to the location deep down in the framework, so we
# only have to cater for the structural bits of the framework.
frameworkinstallframework: frameworkinstallstructure install frameworkinstallmaclib
frameworkinstallstructure: $(LDLIBRARY)
#if test "$(PYTHONFRAMEWORKDIR)" = no-framework; then \
echo Not configured with --enable-framework; \
exit 1; \
else true; \
fi
@for i in $(prefix)/Resources/English.lproj $(prefix)/lib; do\
if test ! -d $(DESTDIR)$$i; then \
echo "Creating directory $(DESTDIR)$$i"; \
$(INSTALL) -d -m $(DIRMODE) $(DESTDIR)$$i; \
else true; \
fi; \
done
$(LN) -fsn include/python$(VERSION) $(DESTDIR)$(prefix)/Headers
sed 's/%VERSION%/'"`$(RUNSHARED) ./$(BUILDPYTHON) -c 'import platform; print platform.python_version()'`"'/g' < $(RESSRCDIR)/Info.plist > $(DESTDIR)$(prefix)/Resources/Info.plist
$(LN) -fsn $(VERSION) $(DESTDIR)$(PYTHONFRAMEWORKINSTALLDIR)/Versions/Current
$(LN) -fsn Versions/Current/$(PYTHONFRAMEWORK) $(DESTDIR)$(PYTHONFRAMEWORKINSTALLDIR)/$(PYTHONFRAMEWORK)
$(LN) -fsn Versions/Current/Headers $(DESTDIR)$(PYTHONFRAMEWORKINSTALLDIR)/Headers
$(LN) -fsn Versions/Current/Resources $(DESTDIR)$(PYTHONFRAMEWORKINSTALLDIR)/Resources
$(INSTALL_SHARED) $(LDLIBRARY) $(DESTDIR)$(PYTHONFRAMEWORKPREFIX)/$(LDLIBRARY)
# This installs Mac/Lib into the framework
# Install a number of symlinks to keep software that expects a normal unix
# install (which includes python-config) happy.
frameworkinstallmaclib:
ln -fs "../../../$(PYTHONFRAMEWORK)" "$(DESTDIR)$(prefix)/lib/python$(VERSION)/config/libpython$(VERSION).a"
ln -fs "../../../$(PYTHONFRAMEWORK)" "$(DESTDIR)$(prefix)/lib/python$(VERSION)/config/libpython$(VERSION).dylib"
ln -fs "../$(PYTHONFRAMEWORK)" "$(DESTDIR)$(prefix)/lib/libpython$(VERSION).dylib"
cd Mac && $(MAKE) installmacsubtree DESTDIR="$(DESTDIR)"
# This installs the IDE, the Launcher and other apps into /Applications
frameworkinstallapps:
cd Mac && $(MAKE) installapps DESTDIR="$(DESTDIR)"
# This install the unix python and pythonw tools in /usr/local/bin
frameworkinstallunixtools:
cd Mac && $(MAKE) installunixtools DESTDIR="$(DESTDIR)"
frameworkaltinstallunixtools:
cd Mac && $(MAKE) altinstallunixtools DESTDIR="$(DESTDIR)"
# This installs the Demos and Tools into the applications directory.
# It is not part of a normal frameworkinstall
frameworkinstallextras:
cd Mac && $(MAKE) installextras DESTDIR="$(DESTDIR)"
# This installs a few of the useful scripts in Tools/scripts
scriptsinstall:
SRCDIR=$(srcdir) $(RUNSHARED) \
./$(BUILDPYTHON) $(srcdir)/Tools/scripts/setup.py install \
--prefix=$(prefix) \
--install-scripts=$(BINDIR) \
--root=$(DESTDIR)/
# Build the toplevel Makefile
Makefile.pre: Makefile.pre.in config.status
CONFIG_FILES=Makefile.pre CONFIG_HEADERS= $(SHELL) config.status
$(MAKE) -f Makefile.pre Makefile
# Run the configure script.
config.status: $(srcdir)/configure
$(SHELL) $(srcdir)/configure $(CONFIG_ARGS)
.PRECIOUS: config.status $(BUILDPYTHON) Makefile Makefile.pre
# Some make's put the object file in the current directory
.c.o:
--
Python/thread.o: $(srcdir)/Python/thread_atheos.h $(srcdir)/Python/thread_beos.h $(srcdir)/Python/thread_cthread.h $(srcdir)/Python/thread_foobar.h $(srcdir)/Python/thread_lwp.h $(srcdir)/Python/thread_nt.h $(srcdir)/Python/thread_os2.h $(srcdir)/Python/thread_pth.h $(srcdir)/Python/thread_pthread.h $(srcdir)/Python/thread_sgi.h $(srcdir)/Python/thread_solaris.h $(srcdir)/Python/thread_wince.h
# Declare targets that aren't real files
.PHONY: all build_all sharedmods oldsharedmods test quicktest memtest
.PHONY: install altinstall oldsharedinstall bininstall altbininstall
.PHONY: maninstall libinstall inclinstall libainstall sharedinstall
.PHONY: frameworkinstall frameworkinstallframework frameworkinstallstructure
.PHONY: frameworkinstallmaclib frameworkinstallapps frameworkinstallunixtools
.PHONY: frameworkaltinstallunixtools recheck autoconf clean clobber distclean
.PHONY: smelly funny patchcheck
.PHONY: gdbhooks
# IF YOU PUT ANYTHING HERE IT WILL GO AWAY
# Rules appended by makedepend
Modules/threadmodule.o: $(srcdir)/Modules/threadmodule.c; $(CC) $(PY_CFLAGS) -c $(srcdir)/Modules/threadmodule.c -o Modules/threadmodule.o
Modules/threadmodule$(SO): Modules/threadmodule.o; $(BLDSHARED) Modules/threadmodule.o -o Modules/threadmodule$(SO)
Modules/signalmodule.o: $(srcdir)/Modules/signalmodule.c; $(CC) $(PY_CFLAGS) -c $(srcdir)/Modules/signalmodule.c -o Modules/signalmodule.o
Modules/signalmodule$(SO): Modules/signalmodule.o; $(BLDSHARED) Modules/signalmodule.o -o Modules/signalmodule$(SO)
Modules/posixmodule.o: $(srcdir)/Modules/posixmodule.c; $(CC) $(PY_CFLAGS) -c $(srcdir)/Modules/posixmodule.c -o Modules/posixmodule.o
Modules/posixmodule$(SO): Modules/posixmodule.o; $(BLDSHARED) Modules/posixmodule.o -o Modules/posixmodule$(SO)
Modules/errnomodule.o: $(srcdir)/Modules/errnomodule.c; $(CC) $(PY_CFLAGS) -c $(srcdir)/Modules/errnomodule.c -o Modules/errnomodule.o
Modules/errnomodule$(SO): Modules/errnomodule.o; $(BLDSHARED) Modules/errnomodule.o -o Modules/errnomodule$(SO)

Makefile not recognizing already generated sources

I'm trying to generate an RPM with a makefile. The behavior I'm expecting from the makefile is the following:
If the RPM doesn't exist and the sources are not yet prepared, go ahead and do both: generate the sources and then the RPM
If the RPM already exists but the sources changed, go ahead and prepare the sources and generate the RPM once again
If the sources haven't changed and the RPM already exists, don't do anything
However, right now the behavior I'm getting from the makefile below is not quite the way I want it, as it recognizes whether the RPM exists, but when it comes down to the sources it doesn't really recognize that they already exist.
Here's the makefile:
SHELL = /bin/bash
.SHELLFLAGS = -o pipefail -c
COLORIZE := 2>&1 | sed -re "s/^(Executing|Wrote)(.*: )/"$$'\E'"[32m\1\2"$$'\E'"[0m/g" \
-e "s/(error[s]?)/"$$'\E'"[31m\1"$$'\E'"[0m/ig" \
-e "s/(warn|warning)/"$$'\E'"[33m\1"$$'\E'"[0m/ig"
SPEC := $(shell find . -name \*spec -printf '%f' -quit)
ARCH := $(shell rpm -q --qf '%{arch}' --specfile $(SPEC))
DIST := .el
NAME := $(basename $(SPEC))
RELEASE := $(shell rpm -q --qf '%{release}' --specfile $(SPEC) | cut -d. -f1)
VERSION := $(shell rpm -q --qf '%{version}' --specfile $(SPEC))
BUILDDIR := ./rpm-build
RPM := $(BUILDDIR)/RPMS/$(ARCH)/$(NAME)-$(VERSION)-$(RELEASE)$(DIST).$(ARCH).rpm
RPMBUILD := rpmbuild --define "_topdir %(pwd)/$(BUILDDIR)" \
--define "_source_filedigest_algorithm md5" \
--define "_binary_filedigest_algorithm md5" \
--define "_source_payload w9.gzdio" \
--define "_binary_payload w9.gzdio" \
--define "_sourcedir %{_topdir}/SOURCES" \
--define "_target_os linux" \
--define "dist .el"
SOURCE0 := $(BUILDDIR)/SOURCES/$(NAME)-$(VERSION).jar
.PHONY: all clean
all: $(RPM)
$(BUILDDIR):
@mkdir -p $(BUILDDIR)/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS,TEMP}
$(SOURCE0): $(BUILDDIR) $(SPEC)
spectool -g -C $(BUILDDIR)/SOURCES $(SPEC)
$(RPM): $(SPEC) $(SOURCE0)
#echo -e "Building $(RPM)"
$(RPMBUILD) -bb $< $(COLORIZE)
clean:
@- $(RM) -rf ./$(BUILDDIR)
Is there anything I'm doing wrong in managing the sources? I just don't want them to be prepared every time I run a make command.
You should never have a target with a directory as a prerequisite, because directory timestamps are updated at unusual times. I shouldn't say "never"; it can be very useful but it means something quite different than what you think.
You can try using an order-only prerequisite for this:
$(BUILDDIR):
@mkdir -p $(BUILDDIR)/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS,TEMP}
$(SOURCE0): $(SPEC) | $(BUILDDIR)
spectool -g -C $(BUILDDIR)/SOURCES $(SPEC)
Or you can just put the mkdir inside the recipe:
$(SOURCE0): $(SPEC)
@mkdir -p $(BUILDDIR)/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS,TEMP}
spectool -g -C $(BUILDDIR)/SOURCES $(SPEC)
Note, you're using bash-specific features here ({}). If you want this to be portable you need to add:
SHELL := /bin/bash
to your makefile.
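As a more general illustration of the order-only idea, here is a minimal sketch with made-up names, not your RPM setup (recipe lines start with a TAB):
OUTDIR := build

$(OUTDIR):
	mkdir -p $(OUTDIR)

# The pipe makes $(OUTDIR) order-only: it must exist before out.txt is
# built, but its timestamp never makes out.txt look out of date.
$(OUTDIR)/out.txt: input.txt | $(OUTDIR)
	cp input.txt $@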

Makefile:108: *** recipe commences before first target

GNU Make 4.1 Built for x86_64-pc-linux-gnu
Below is the Makefile:
# Project variables
PROJECT_NAME ?= todobackend
ORG_NAME ?= shamdockerhub
REPO_NAME ?= todobackend
# File names
DEV_COMPOSE_FILE := docker/dev/docker-compose.yml
REL_COMPOSE_FILE := docker/release/docker-compose.yml
# Docker compose project names
REL_PROJECT := $(PROJECT_NAME)$(BUILD_ID)
DEV_PROJECT := $(REL_PROJECT)dev
# Check and inspect logic
INSPECT := $$(docker-compose -p $$1 -f $$2 ps -q $$3 | xargs -I ARGS docker inspect -f "{{ .State.ExitCode }}" ARGS)
CHECK := @bash -c '\
if [[ $(INSPECT) -ne 0 ]]; \
then exit $(INSPECT); fi' VALUE
# Use these settings to specify a custom Docker registry
DOCKER_REGISTRY ?= docker.io
APP_SERVICE_NAME := app
.PHONY: test build release clean tag
test: # Run unit & integration test cases
${INFO} "Pulling latest images..."
@ docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) pull
${INFO} "Building images..."
@ docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) build cache
@ docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) build --pull test
${INFO} "Ensuring database is ready..."
@ docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) run --rm agent
${INFO} "Running tests..."
@ docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) up test
@ docker cp $$(docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) ps -q test):/reports/. reports
${CHECK} ${DEV_PROJECT} ${DEV_COMPOSE_FILE} test
${INFO} "Testing complete"
build: # Create deployable artifact and copy to ../target folder
${INFO} "Creating builder image..."
@ docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) build builder
${INFO} "Building application artifacts..."
@ docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) up builder
${CHECK} ${DEV_PROJECT} ${DEV_COMPOSE_FILE} builder
${INFO} "Copying artifacts to target folder..."
@ docker cp $$(docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) ps -q builder):/wheelhouse/. target
${INFO} "Build complete"
release: # Creates release environment, bootstrap the environment
${INFO} "Building images..."
@ docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) build webroot
@ docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) build app
${INFO} "Ensuring database is ready..."
@ docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) run --rm agent
${INFO} "Collecting static files..."
@ docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) run --rm app manage.py collectstatic --noinput
${INFO} "Running database migrations..."
@ docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) run --rm app manage.py migrate --noinput
${INFO} "Pull external image and build..."
@ docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) build --pull nginx
@ docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) pull test
${INFO} "Running acceptance tests..."
@ docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) up test
${CHECK} $(REL_PROJECT) $(REL_COMPOSE_FILE) test
@ docker cp $$(docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) ps -q test):/reports/. reports
${INFO} "Acceptance testing complete"
clean:
${INFO} "Destroying development environment..."
@ docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) kill
@ docker-compose -p $(DEV_PROJECT) -f $(DEV_COMPOSE_FILE) rm -f -v
@ docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) kill
@ docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) rm -f -v
@ docker images -q -f dangling=true -f label=application=$(REPO_NAME) | xargs -I ARGS docker rmi -f ARGS
${INFO} "Clean complete"
tag:
$(INFO) "Tagging release image with tags $(TAG_ARGS)"
@ $(foreach tag, $(TAG_ARGS), docker tag $(IMAGE_ID) $(DOCKER_REGISTRY)/$(ORG_NAME)/$(REPO_NAME):$(tag);)
${INFO} "Tagging complete"
# Cosmetics
YELLOW := "\e[1;33m"
NC := "\e[0m"
# Shell functions
INFO := @bash -c '\
printf $(YELLOW); \
echo "=> $$1"; \
printf $(NC)' VALUE
# Get container id of application service container
APP_CONTAINER_ID := $$(docker-compose -p $(REL_PROJECT) -f $(REL_COMPOSE_FILE) ps -q $(APP_SERVICE_NAME))
# Get image id of application service
IMAGE_ID := $$(docker inspect -f '{{ .Image }}' $(APP_CONTAINER_ID))
# Extract tag arguments
ifeq (tag, $(firstword $(MAKECMDGOALS)))
TAG_ARGS := $(wordlist 2, $(words $(MAKECMDGOALS)), $(MAKECMDGOALS))
ifeq ($(TAG_ARGS),)
$(error You must specify a tag)
endif
$(eval $(TAG_ARGS):;@:) # line 108: Do not interpret "0.1 latest whatever" as make target files
endif
Below is the error when running the make command:
$ make tag 0.1 latest $(git rev-parse --short HEAD)
Makefile:108: *** recipe commences before first target. Stop.
Line 108: the purpose of $(eval $(TAG_ARGS):;@:) is to convey that 0.1 latest $(git rev-parse --short HEAD) are not make target files.
Why does $(eval $(TAG_ARGS):;@:) give an error?
That particular error happens because your $(eval ...) line is indented by a TAB (something that is hidden by this horribly broken web interface).
Example:
$ make -f <(printf '\t$(eval foo:;echo yup)')
/dev/fd/63:1: *** recipe commences before first target. Stop.
# now with spaces instead of TAB
$ make -f <(printf ' $(eval foo:;echo yup)')
echo yup
yup
The error is documented in the make manual:
recipe commences before first target. Stop.
This means the first thing in the makefile seems to be part of a
recipe: it begins with a recipe prefix character and doesn't appear
to be a legal make directive (such as a variable assignment).
Recipes must always be associated with a target.
The "recipe prefix character" is TAB by default.
$ make -f <(printf '\tfoo')
/dev/fd/63:1: *** recipe commences before first target. Stop.
It doesn't have to be the "first thing in the makefile", though: the same error will trigger after a number of rules, if preceded by a directive like a macro assignment or such:
$ make -f <(printf 'all:;\nkey=val\n\tfoo')
/dev/fd/63:3: *** recipe commences before first target. Stop.
And even if a macro expands to an empty string, GNU make will not consider a line empty when it contains only macros that expand to empty strings:
$ make -f <(printf '\t\nfoo:;@:')
$ make -f <(printf '\t$(info foo)\nfoo:;@:')
/dev/fd/63:1: *** recipe commences before first target. Stop.
$ make -f <(printf ' $(info foo)\nfoo:;@:')
foo
I can't reproduce this problem. I put your last ifeq statement into a makefile and it works fine for me with GNU make 4.1 and 4.2.1. There must be something more unusual about your situation.
The classic way to debug issues with eval is to duplicate the line and replace the eval with info; this way make will print out exactly what it sees. Often this will show you what is wrong.
There are other confusing things about this makefile.
First, why are you using eval here in the first place? Why not just write the rule directly? There's nothing wrong with:
$(TAG_ARGS):;@:
no need to wrap it in an eval.
Second, why are you using := then escaping the variables? Why not just use = instead and not bother with the escapes?
INSPECT = $(docker-compose -p $1 -f $2 ps -q $3 | xargs -I ARGS docker inspect -f "{{ .State.ExitCode }}" ARGS)
works just fine.
Finally, I strongly urge you to not add @ to your recipes. It makes debugging makefiles very difficult and frustrating. Instead consider using a method such as Managing Recipe Echoing to handle this.
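For example, the eval-debugging advice above could look like this in your makefile (with no leading TAB on these lines, which is exactly what the other answer is about):
# Prints the exact text make is about to evaluate, so a stray TAB or an
# unexpected expansion of TAG_ARGS becomes visible.
$(info $(TAG_ARGS):;@:)
$(eval $(TAG_ARGS):;@:)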

How to install cross compiled cups to target board?

I cross-compiled CUPS 1.7.0 for Sitara ARM Linux 6.
I followed:
./configure --host=arm-linux-gnueabihf --disable-gssapi --prefix=/media/rootfs
make
make install
All CUPS-related files are automatically saved to the SD card,
but it shows an error when I type the cupsd command and the CUPS server does not start:
cupsd: Child exited on signal 1.
On checking /etc/cups/cupsd.conf, several paths in the configuration files are
/media/rootfs/var/run/cups/cups.sock
instead of
/var/run/cups/cups.sock
1) How do I install this compiled CUPS to the target board without --prefix?
2) Is there any step missing for cross compilation?
Any help would be appreciated.
This is the Makefile:
#
# "$Id: Makefile 11107 2013-07-08 13:47:51Z msweet $"
#
# Top-level Makefile for CUPS.
#
# Copyright 2007-2013 by Apple Inc.
# Copyright 1997-2007 by Easy Software Products, all rights reserved.
#
# These coded instructions, statements, and computer programs are the
# property of Apple Inc. and are protected by Federal copyright
# law. Distribution and use rights are outlined in the file "LICENSE.txt"
# which should have been included with this file. If this file is
# missing or damaged, see the license at "http://www.cups.org/".
#
include Makedefs
#
# Directories to make...
#
DIRS = cups test $(BUILDDIRS)
#
# Make all targets...
#
all:
chmod +x cups-config
echo Using ARCHFLAGS="$(ARCHFLAGS)"
echo Using ALL_CFLAGS="$(ALL_CFLAGS)"
echo Using ALL_CXXFLAGS="$(ALL_CXXFLAGS)"
echo Using CC="$(CC)"
echo Using CXX="$(CC)"
echo Using DSOFLAGS="$(DSOFLAGS)"
echo Using LDFLAGS="$(LDFLAGS)"
echo Using LIBS="$(LIBS)"
for dir in $(DIRS); do\
echo Making all in $$dir... ;\
(cd $$dir ; $(MAKE) $(MFLAGS) all $(UNITTESTS)) || exit 1;\
done
#
# Make library targets...
#
libs:
echo Using ARCHFLAGS="$(ARCHFLAGS)"
echo Using ALL_CFLAGS="$(ALL_CFLAGS)"
echo Using ALL_CXXFLAGS="$(ALL_CXXFLAGS)"
echo Using CC="$(CC)"
echo Using CXX="$(CC)"
echo Using DSOFLAGS="$(DSOFLAGS)"
echo Using LDFLAGS="$(LDFLAGS)"
echo Using LIBS="$(LIBS)"
for dir in $(DIRS); do\
echo Making libraries in $$dir... ;\
(cd $$dir ; $(MAKE) $(MFLAGS) libs) || exit 1;\
done
#
# Make unit test targets...
#
unittests:
echo Using ARCHFLAGS="$(ARCHFLAGS)"
echo Using ALL_CFLAGS="$(ALL_CFLAGS)"
echo Using ALL_CXXFLAGS="$(ALL_CXXFLAGS)"
echo Using CC="$(CC)"
echo Using CXX="$(CC)"
echo Using DSOFLAGS="$(DSOFLAGS)"
echo Using LDFLAGS="$(LDFLAGS)"
echo Using LIBS="$(LIBS)"
for dir in $(DIRS); do\
echo Making all in $$dir... ;\
(cd $$dir ; $(MAKE) $(MFLAGS) unittests) || exit 1;\
done
#
# Remove object and target files...
#
clean:
for dir in $(DIRS); do\
echo Cleaning in $$dir... ;\
(cd $$dir; $(MAKE) $(MFLAGS) clean) || exit 1;\
done
#
# Remove all non-distribution files...
#
distclean: clean
$(RM) Makedefs config.h config.log config.status
$(RM) cups-config
$(RM) conf/cupsd.conf conf/mime.convs conf/pam.std conf/snmp.conf
$(RM) doc/help/ref-cupsd-conf.html doc/help/standard.html doc/index.html
$(RM) man/client.conf.man
$(RM) man/cups-deviced.man man/cups-driverd.man
$(RM) man/cups-lpd.man man/cupsaddsmb.man man/cupsd.man
$(RM) man/cupsd.conf.man man/drv.man man/lpoptions.man
$(RM) packaging/cups.list
$(RM) packaging/cups-desc.plist packaging/cups-info.plist
$(RM) templates/header.tmpl
$(RM) desktop/cups.desktop
$(RM) scheduler/cups.sh scheduler/cups-lpd.xinetd
$(RM) scheduler/org.cups.cups-lpd.plist scheduler/cups.xml
-$(RM) doc/*/index.html
-$(RM) templates/*/header.tmpl
-$(RM) -r autom4te*.cache clang cups/charmaps cups/locale driver/test
#
# Make dependencies
#
depend:
for dir in $(DIRS); do\
echo Making dependencies in $$dir... ;\
(cd $$dir; $(MAKE) $(MFLAGS) depend) || exit 1;\
done
#
# Run the clang.llvm.org static code analysis tool on the C sources.
# (at least checker-231 is required for scan-build to work this way)
#
.PHONY: clang clang-changes
clang:
$(RM) -r clang
scan-build -V -k -o `pwd`/clang $(MAKE) $(MFLAGS) clean all
clang-changes:
scan-build -V -k -o `pwd`/clang $(MAKE) $(MFLAGS) all
#
# Generate a ctags file...
#
ctags:
ctags -R .
#
# Install everything...
#
install: install-data install-headers install-libs install-exec
#
# Install data files...
#
install-data:
echo Making all in cups...
(cd cups; $(MAKE) $(MFLAGS) all)
for dir in $(DIRS); do\
echo Installing data files in $$dir... ;\
(cd $$dir; $(MAKE) $(MFLAGS) install-data) || exit 1;\
done
echo Installing cups-config script...
$(INSTALL_DIR) -m 755 $(BINDIR)
$(INSTALL_SCRIPT) cups-config $(BINDIR)/cups-config
#
# Install header files...
#
install-headers:
for dir in $(DIRS); do\
echo Installing header files in $$dir... ;\
(cd $$dir; $(MAKE) $(MFLAGS) install-headers) || exit 1;\
done
if test "x$(privateinclude)" != x; then \
echo Installing config.h into $(PRIVATEINCLUDE)...; \
$(INSTALL_DIR) -m 755 $(PRIVATEINCLUDE); \
$(INSTALL_DATA) config.h $(PRIVATEINCLUDE)/config.h; \
fi
#
# Install programs...
#
install-exec: all
for dir in $(DIRS); do\
echo Installing programs in $$dir... ;\
(cd $$dir; $(MAKE) $(MFLAGS) install-exec) || exit 1;\
done
#
# Install libraries...
#
install-libs: libs
for dir in $(DIRS); do\
echo Installing libraries in $$dir... ;\
(cd $$dir; $(MAKE) $(MFLAGS) install-libs) || exit 1;\
done
#
# Uninstall object and target files...
#
uninstall:
for dir in $(DIRS); do\
echo Uninstalling in $$dir... ;\
(cd $$dir; $(MAKE) $(MFLAGS) uninstall) || exit 1;\
done
echo Uninstalling cups-config script...
$(RM) $(BINDIR)/cups-config
-$(RMDIR) $(BINDIR)
#
# Run the test suite...
#
test: all unittests
echo Running CUPS test suite...
cd test; ./run-stp-tests.sh
check: all unittests
echo Running CUPS test suite with defaults...
cd test; ./run-stp-tests.sh 1 0 n n
debugcheck: all unittests
echo Running CUPS test suite with debug printfs...
cd test; ./run-stp-tests.sh 1 0 n y
#
# Create HTML documentation using Mini-XML's mxmldoc (http://www.msweet.org/)...
#
apihelp:
for dir in cgi-bin cups filter ppdc scheduler; do\
echo Generating API help in $$dir... ;\
(cd $$dir; $(MAKE) $(MFLAGS) apihelp) || exit 1;\
done
framedhelp:
for dir in cgi-bin cups filter ppdc scheduler; do\
echo Generating framed API help in $$dir... ;\
(cd $$dir; $(MAKE) $(MFLAGS) framedhelp) || exit 1;\
done
#
# Create an Xcode docset using Mini-XML's mxmldoc (http://www.msweet.org/)...
#
docset: apihelp
echo Generating docset directory tree...
$(RM) -r org.cups.docset
mkdir -p org.cups.docset/Contents/Resources/Documentation/help
mkdir -p org.cups.docset/Contents/Resources/Documentation/images
cd man; $(MAKE) $(MFLAGS) html
cd doc; $(MAKE) $(MFLAGS) docset
cd cgi-bin; $(MAKE) $(MFLAGS) makedocset
cgi-bin/makedocset org.cups.docset \
`svnversion . | sed -e '1,$$s/[a-zA-Z]//g'` \
doc/help/api-*.tokens
$(RM) doc/help/api-*.tokens
echo Indexing docset...
/Applications/Xcode.app/Contents/Developer/usr/bin/docsetutil index org.cups.docset
echo Generating docset archive and feed...
$(RM) org.cups.docset.atom
/Applications/Xcode.app/Contents/Developer/usr/bin/docsetutil package --output org.cups.docset.xar \
--atom org.cups.docset.atom \
--download-url http://www.cups.org/org.cups.docset.xar \
org.cups.docset
#
# Lines of code computation...
#
sloc:
for dir in cups scheduler; do \
(cd $$dir; $(MAKE) $(MFLAGS) sloc) || exit 1;\
done
#
# Make software distributions using EPM (http://www.msweet.org/)...
#
EPMFLAGS = -v --output-dir dist $(EPMARCH)
aix bsd deb depot inst pkg setld slackware swinstall tardist:
epm $(EPMFLAGS) -f $@ cups packaging/cups.list
epm:
epm $(EPMFLAGS) -s packaging/installer.gif cups packaging/cups.list
rpm:
epm $(EPMFLAGS) -f rpm -s packaging/installer.gif cups packaging/cups.list
.PHONEY: dist
dist: all
$(RM) -r dist
$(MAKE) $(MFLAGS) epm
case `uname` in \
*BSD*) $(MAKE) $(MFLAGS) bsd;; \
Darwin*) $(MAKE) $(MFLAGS) osx;; \
Linux*) test ! -x /usr/bin/rpm || $(MAKE) $(MFLAGS) rpm;; \
SunOS*) $(MAKE) $(MFLAGS) pkg;; \
esac
#
# Don't run top-level build targets in parallel...
#
.NOTPARALLEL:
#
# End of "$Id: Makefile 11107 2013-07-08 13:47:51Z msweet $".
#
The DSTROOT=/media/rootfs option to make install is the solution for CUPS.
./configure --host=arm-linux-gnueabihf --disable-gssapi --libdir=/usr/lib
make
sudo make install DSTROOT=/media/rootfs
Don't use --prefix.
cupsd: Child exited on signal 1.
This error is due to hardcoded paths. You can avoid it by placing the installed executables in the same directory on the target, by creating that directory in your rootfs.
There are two ways to avoid the above:
1) While configuring the source code, give --prefix some other folder like $HOME/cups so that this will be standalone. Create a directory with the same name as your prefix ($HOME/cups) in your rootfs and copy the standalone executables there.
E.g. with --prefix=/home/vinay/cups, make the same directory path /home/vinay/cups in your rootfs and copy all your executables there.
2) For the second way I am not sure whether it will work or not, since I only tried it for a Qt project and it worked there: use a sysroot. Provide the options --sysroot=/media/rootfs --prefix=/media/rootfs so that, while executing, your executables search the paths correctly with the help of the sysroot.
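A quick way to see whether the configure-time prefix leaked into the generated configuration (assuming the SD card is still mounted at /media/rootfs):
# Any hits mean the path was baked in at configure time rather than only
# being applied at install time via DSTROOT.
grep -rn "/media/rootfs" /media/rootfs/etc/cups/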
