Sylius v0.17: How to add payment methods

I installed Sylius via composer create-project -s dev sylius/sylius-standard acme on my local server to take a closer look at it for an upcoming project. Now I'm stuck at adding payment methods / a payment gateway.
I tried to follow the docs and install omnipay-bundle but composer require "sylius/omnipay-bundle" failed with:
Problem 1
- Installation request for sylius/omnipay-bundle ^0.9.0 -> satisfiable by sylius/omnipay-bundle[v0.9.0].
- Conclusion: remove omnipay/omnipay 2.3.2
- Conclusion: don't install omnipay/omnipay 2.3.2
- sylius/omnipay-bundle v0.9.0 requires omnipay/omnipay 1.0.* -> satisfiable by omnipay/omnipay[v1.0.0, v1.0.1, v1.0.2, v1.0.3, v1.0.4].
- Can only install one of: omnipay/omnipay[v1.0.0, 2.3.2].
- Can only install one of: omnipay/omnipay[v1.0.1, 2.3.2].
- Can only install one of: omnipay/omnipay[v1.0.2, 2.3.2].
- Can only install one of: omnipay/omnipay[v1.0.3, 2.3.2].
- Can only install one of: omnipay/omnipay[v1.0.4, 2.3.2].
- Installation request for omnipay/omnipay == 2.3.2.0 -> satisfiable by omnipay/omnipay[2.3.2].
Adding the bundle to AppKernel.php anyway, and/or adding the configuration to config.yml (as described in the docs), prevents the server from starting.
I found this issue: https://github.com/Sylius/Sylius/issues/4396
which seems related.
Question:
- Should there be choices other than 'Offline' under 'Payment Methods' in the admin frontend (without adding code to a freshly pulled Sylius)?
- Is Sylius changing so rapidly that the docs don't match?
- Let's assume I want to add '2Checkout' (just as an example) as a payment gateway - what would I have to do?
I have the feeling I missed something fundamental here :)
Thanks for your help in advance!

My question was answered in GitHub issue 4396.
So, sylius-standard already contains the required Symfony2 bundles.
To add a payment gateway, they just need to be configured in config.yml:
payum:
    gateways:
        paypal_express_checkout:
            paypal_express_checkout_nvp:
                username: %paypal.express_checkout.username%
                password: %paypal.express_checkout.password%
                signature: %paypal.express_checkout.signature%
                sandbox: %paypal.express_checkout.sandbox%
        klarna_checkout:
            klarna_checkout:
                secret: 'required'
                merchant_id: 'required'
                sandbox: true

sylius_payment:
    gateways:
        paypal_express_checkout: Paypal Express Checkout
        klarna_checkout: Klarna Checkout
Additional configuration reference can be found here:
https://github.com/Payum/PayumBundle/blob/master/Resources/doc/configuration_reference.md
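The %paypal.express_checkout.*% placeholders above are container parameters and have to be defined somewhere, typically app/config/parameters.yml. A minimal sketch (the keys mirror the config above; the values are placeholders you fill in from your PayPal sandbox account):

```yaml
# app/config/parameters.yml (sketch - values are placeholders)
parameters:
    paypal.express_checkout.username: your_api_username
    paypal.express_checkout.password: your_api_password
    paypal.express_checkout.signature: your_api_signature
    paypal.express_checkout.sandbox: true
```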
Thanks!
PS: The cache has to be cleared (e.g. php app/console cache:clear) before restarting the server.

Related

How to use Django/Nodejs with DDEV

I work a lot with DDEV on my PHP projects and love the features DDEV offers.
Since I also work with Django and NodeJS projects, I would like to use them in combination with DDEV. Officially these are not yet supported in the current version (1.18), but maybe someone has already found a solution?
For a quick and dirty answer on Django, I'd like to get you started with a simple and probably inadequate approach - but it shows how easy it is to add something like Django. We'll just use the Django dev server.
Make a directory (I called mine dj) and cd dj
ddev config --auto
Add to the .ddev/config.yaml:
webimage_extra_packages: [python3-django]
hooks:
  post-start:
    - exec: python3 manage.py runserver 0.0.0.0:8000
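Put together, the whole .ddev/config.yaml would look roughly like this (name and type are assumptions here; ddev generates most of the file for you, and you only append the last four lines):

```yaml
# .ddev/config.yaml (sketch - most generated fields omitted)
name: dj
type: php
webimage_extra_packages: [python3-django]
hooks:
  post-start:
    - exec: python3 manage.py runserver 0.0.0.0:8000
```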
Add .ddev/docker-compose.django.yaml:
version: "3.6"
services:
  web:
    expose:
      - 8000
    environment:
      - HTTP_EXPOSE=80:8000
      - HTTPS_EXPOSE=443:8000
    healthcheck:
      test: "true"
ddev start
ddev ssh and create a trivial django project:
django-admin startproject dj .
Add to your dj/settings.py ALLOWED_HOSTS = ["dj.ddev.site"]
Exit back out to the host with ctrl-D or exit and ddev start
You should be able to access the trivial project at https://dj.ddev.site
Note that as you proceed, you'll probably want to end up starting the django server another way, or more likely actually front it by the ddev-webserver nginx server, which would be more natural (as in https://docs.nginx.com/nginx/admin-guide/web-server/app-gateway-uwsgi-django/). But for now, this is a simple demonstration. Happy to help you as you go along.

Use private npm registry for Google App Engine Standard

In all the other Stack Overflow questions, it seems people are asking either about a private npm git repository or about a different technology stack. I'm pretty sure I can use a private npm registry with GAE Flexible, but I was wondering if it is possible with the Standard version?
From the GAE Standard docs, it doesn't seem like it is possible. Has anyone figured out otherwise?
Google marked this feature request as "won't fix, intended behavior" but there is a workaround.
Presumably you have access to the environment variables during the build stage of your CI/CD pipeline. Begin that stage by having your build script overwrite the .npmrc file using the value of the environment variable (note: the value, not the variable name). The .npmrc file (and the token in it) will then be available to the rest of the CI/CD pipeline.
For example:
- name: Install and build
  env:
    NPM_AUTH_TOKEN: ${{ secrets.PRIVATE_REPO_PACKAGE_READ_TOKEN }}
  run: |
    # Remove these 'echo' statements after we migrate off of Google App Engine.
    # See replies 14 and 18 here: https://issuetracker.google.com/issues/143810864?pli=1
    echo "//npm.pkg.github.com/:_authToken=${NPM_AUTH_TOKEN}" > .npmrc
    echo "@organizationname:registry=https://npm.pkg.github.com" >> .npmrc
    echo "always-auth=true" >> .npmrc
    npm install
    npm run compile
    npm run secrets:get ${{ secrets.YOUR_GCP_PROJECT_ID }}
Hat tip to the anonymous heroes who wrote replies 14 and 18 in the Issue Tracker thread - https://issuetracker.google.com/issues/143810864?pli=1
If you have a .npmrc file checked in with your project's code, you would be wise to put a comment at the top, explaining that it will be overwritten during the CI/CD pipeline. Otherwise, Murphy's Law dictates that you (or a teammate) will check in a change to that .npmrc file and then waste an unbounded amount of time trying to figure out why that change has no effect during deployment.
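For example, a checked-in .npmrc might carry a warning like this (the scope name is a placeholder; the token line is what the CI step overwrites):

```ini
# WARNING: this file is regenerated by the CI/CD pipeline (see the
# "Install and build" step). Local edits here do NOT reach deployment.
@organizationname:registry=https://npm.pkg.github.com
always-auth=true
```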

How to increase the file limit of GitHub Actions?

I have the following error:
Error: ENOSPC: System limit for number of file watchers reached, watch '/home/runner/work...
I tried every way I could find to increase the limit (ulimit -S -n unlimited, sysctl, etc.) but none of them seem to work, not even with sudo.
My website has a lot of markdown files (~80k) used by Gatsby to build the final .html files.
On my machine I just need to increase the file limit and then it works, of course. But in GitHub Actions I can't figure out a way to do this.
My GitHub Actions workflow.yml:
name: Build
on: [push, repository_dispatch]
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Increase file limit
        run: sudo sysctl -w fs.file-max=65536
      - name: Debug
        run: ulimit -a
      - name: Set Node.js
        uses: actions/setup-node@master
        with:
          node-version: 12.x
      - name: Install dependencies
        run: npm install
      - name: Build
        run: npm run build
I think this could be related to this issue: https://github.com/gatsbyjs/gatsby/issues/17321
It sounds like these GitHub/Expo issues might be the problem:
https://github.com/expo/expo-github-action/issues/20
ENOSPC: System limit for number of file watchers reached
https://github.com/expo/expo-cli/issues/277
Handle ENOSPC error (fs.inotify.max_user_watches reached)
Thanks for testing!
I'm afraid this seems to be a GitHub Action
limitation. That docker image is forcing the
fs.inotify.max_user_watches limit to 524288, but apparently GHA is
overwriting this back to 8192. You can see this happen in a fork of
your repo (when we are done, I'll remove the fork ofc, let me know if
you want to have it removed earlier).
Continuing...
Yes, it's related to a limitation of the environment you are running
Expo CLI in. The metro bundler requires a high amount of listeners
apparently. This fails if the host environment is limiting this. So
technically its an environment issue, but I'm not sure if the CLI can
change anything about this.
I find the limit in GitHub Action personally a little low. Like I
tried to outline in an earlier comment on that CLI issue, the
limitation in other CI vendors is actually set to the default max
listeners. Why they did not do this in GH Actions is unclear, that's
what I try to find out. Might be a configurational issue on their
hands, or an intentional limitation.
... And ...
So, there exists a fix that seemed to work for me when I tried it. What
I did was to follow this guy's tip: "Increasing the number of
watchers" - #JNYBGR https://link.medium.com/9Zbt3B4pM0
I then did this in my main action.yml, with all the specifics
underneath the dev release:
steps:
  - uses: actions/checkout@v1
  - name: Setup kernel for react native, increase watchers
    run: echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
  - name: Run dev release fastlane inside docker action
Please let us know if any of this matches your environment/scenario, and if you find a viable workaround.
UPDATE:
The OP tried fs.inotify.max_user_watches=524288 in his .yaml, and now Gatsby is failing with Error: EMFILE: too many open files, open '/home/runner/work/virtualizedfy.gatsby', and Node.js subsequently crashes with an assertion error:
node[3007]: ../src/spawn_sync.cc:460: v8::Maybe<bool> node::SyncProcessRunner::TryInitializeAndRunLoop(v8::Local<v8::Value>): Assertion '(uv_loop_init(uv_loop_)) == (0)' failed.
ADDITIONAL SUGGESTION:
https://github.com/gatsbyjs/gatsby/issues/12011
Google seems to suggest https://github.com/isaacs/node-graceful-fs as
a drop-in replacement for fs; I might also experiment with that to see
if it makes a difference.
EDIT: I can confirm that monkeypatching fs with graceful-fs at the top
of gatsby-node.js as in the snippet below fixes the issue for me.
const realFs = require('fs')
const gracefulFs = require('graceful-fs')
gracefulFs.gracefulify(realFs)
EDIT2: Actually after upgrading from Node 10 to Node 11 everything
seems to be fine without having to patch fs... So all is well!

CouchDB in CloudFoundry?

I'm reviewing the Cloud Foundry project and trying to install it on a server.
I will use CouchDB as a database service.
My main question is: how do I use CouchDB in Cloud Foundry?
I installed a CF instance with: vcap_dev_setup -c devbox_all.yml -D mydomain.com
The devbox.yml contains:
install:
  - all
In this install, the couchdb_node and couchdb_gateway are present by default.
But it seems to be buggy in general.
When I delete an app, I get this error, for example:
$ vmc delete notes2
Provisioned service [mongodb-d216a] detected, would you like to delete it? [yN]: y
Provisioned service [redis-8fcdc] detected, would you like to delete it? [yN]: y
Deleting application [notes2]: OK
Deleting service [mongodb-d216a]: Error 503: Unexpected response from service gateway
So I tried to install a CF instance with this config
(a standard single-node with Redis, Couch and Mongo).
conf.yml:
jobs:
  install:
    - nats_server
    - router
    - stager
    - ccdb
    - cloud_controller:
        builtin_services:
          - redis
          - mongodb
          - couchdb
    - health_manager
    - dea
    - uaa
    - uaadb
    - redis_node:
        index: "0"
    - couchdb_node:
        index: "0"
    - mongodb_node:
        index: "0"
    - coudb_gateway
    - redis_gateway
    - mongodb_gateway
First, this config doesn't work, because 'couchdb' is not a valid keyword (in the builtin_services part).
So, what am I doing wrong?
Is the CouchDB integration still in progress and just not finished as of last week?
To continue: I did manage to install the CF instance without the couchdb built-in services option, but with a couchdb_node and a couchdb_gateway. And they start.
I suppose the service is runnable.
But I can't use 'couchdb' in my app's manifest.yml or choose this service to bind to.
(That seems normal, because it's not installed as a service.)
So, it seems to be close to working, but it's not.
I'd welcome ideas or advice on this subject here, because I couldn't find people talking about it around the web.
Thanks for reading.
Lucas
I decided to try this myself and it appears to work OK. I created a new VCAP instance with vcap_dev_setup and the following configuration:
---
deployment:
  name: "cloudfoundry"
jobs:
  install:
    - nats_server
    - cloud_controller:
        builtin_services:
          - mysql
          - postgresql
          - couchdb
    - stager
    - router
    - health_manager
    - uaa
    - uaadb
    - ccdb
    - dea
    - couchdb_gateway
    - couchdb_node:
        index: "0"
    - postgresql_gateway
    - postgresql_node:
        index: "0"
    - mysql_gateway
    - mysql_node:
        index: "0"
I was able to bind instances of CouchDB to a node app and read the service info from VCAP_SERVICES, as below;
'{"couchdb-1.2":[{"name":"couchdb-c7eb","label":"couchdb-1.2","plan":"free","tags":["key-value","cache","couchdb-1.2","couchdb"],"credentials":{"hostname":"127.0.0.1","host":"127.0.0.1","port":5984,"username":"7f3c0567-89cc-4240-b249-40d1f4586035","password":"8fef9e88-3df2-46a8-a22c-db02b2917251","name":"dde98c69f-01e9-4e97-b0d6-43bed946da95"}}]}'
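In the app itself, those credentials can be pulled out of the VCAP_SERVICES environment variable. A sketch in Node (the fallback sample below just mimics the shape of the output above, with dummy credentials):

```javascript
// Extract CouchDB connection info from VCAP_SERVICES (Cloud Foundry sets it);
// falls back to a dummy sample with the same shape for local testing.
const sample = JSON.stringify({
  'couchdb-1.2': [{
    name: 'couchdb-c7eb',
    label: 'couchdb-1.2',
    credentials: { host: '127.0.0.1', port: 5984, username: 'u', password: 'p', name: 'db' }
  }]
});
const services = JSON.parse(process.env.VCAP_SERVICES || sample);
const cred = services['couchdb-1.2'][0].credentials;
const url = `http://${cred.username}:${cred.password}@${cred.host}:${cred.port}/${cred.name}`;
console.log(url);
```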
I was also able to tunnel the service to a local port and connect to it, which you can see in this image.
What version of Ubuntu have you used to install VCAP?

Puppet not recognising my module

I am trying to create a custom provider for package, but for some reason I keep getting:
err: Could not run Puppet configuration client: Parameter provider
failed: Invalid package provider 'piprs' at
/usr/local/src/ops/services/puppet/modules/test/manifests/init.pp:5
I have added pluginsync=true to puppet.conf on both the client and the server. I have created the following .rb file at module/test/lib/puppet/provider/package/piprs.rb. I am basically trying to create a custom provider for the package resource type:
#require 'puppet/provider/package'

Puppet::Type.type(:package).provide(:piprs,
  :parent => ::Puppet::Provider::Package) do

  commands : pip => "/usr/local/bin/pip"

  desc "Python packages via `pip`."

  def create
    pip "freeze"
  end

  def destroy
  end

  def exists?
  end
end
In the puppet.conf, there is the following source attribute
pluginsource = puppet://puppet/plugins
I am not sure what it is. If you need anymore details, please do post a comment.
First things first - you do realize there is already a Python pip provider in core?
https://github.com/puppetlabs/puppet/blob/master/lib/puppet/provider/package/pip.rb
If that isn't what you want - then let's move on ...
For starters - try your module without a Puppet master - this is going to be better for development anyway. You need to make sure Ruby can find the library path:
export RUBYLIB=<path_to_module>/lib
Then, try writing a small test in a .pp file:
package { "mypackage": provider => "piprs" }
And run it locally:
puppet apply mytest.pp
This will rule out a code bug in your provider versus a plugin sync issue.
I notice there is a space between the colon and the command - that isn't your problem is it?
commands : pip => "/usr/local/bin/pip"
If you can get this working without a puppetmaster, your problem is sync related.
There are a couple of things that can go wrong - make sure the file is sync'd properly on the client:
ls /var/lib/puppet/lib/puppet/provider/package
You should see the piprs.rb file there. If it is, you may need to make sure your libdir is set correctly:
puppet --configprint libdir
This should point to /var/lib/puppet/lib in most cases.
