could not find implicit value for evidence parameter of type io.finch.Decode.Json - finch

I have been working on this for a couple of days and still have no clue what's going on.
Got a finch web service, the build.sbt dependencies look like:
"com.github.finagle" %% "finch-circe" % finchVersion changing(),
"com.github.finagle" %% "finch-core" % finchVersion changing(),
"com.github.finagle" %% "finch-jackson" % finchVersion changing(),
"com.github.finagle" %% "finch-test" % finchVersion changing(),
"com.twitter" %% "twitter-server" % "1.28.0",
The finch version is 0.14.0.
And the endpoint looks like:
def makeService(statsReceiver: StatsReceiver): Service[Request, Response] = {
  //val getUserCounter = statsReceiver.counter("get_user_counter")
  (
    MyEndpoint.endpoint1()
      :+: SomeEndpoint.deleteEntity()
      :+: SomeEndpoint.createEntity()
      :+: SomeEndpoint.updateEntity()
  ) handle {
    case e: InvalidClientError => Unauthorized(e)
    case e: InvalidContextError => BadRequest(e)
    case e: RelevanceError => BadRequest(e)
    case e: Exception => InternalServerError(e)
  } toService
}
And I get the following error on the "toService" line:
[error] /workplace/relevance-service/src/main/scala/com/company/service/endpoint/serviceEndpoints.scala:39: An Endpoint you're trying to convert into a Finagle service is missing one or more encoders.
[error]
[error] Make sure Exception is one of the following:
[error]
[error] * A com.twitter.finagle.http.Response
[error] * A value of a type with an io.finch.Encode instance (with the corresponding content-type)
[error] * A coproduct made up of some combination of the above
[error]
And I looked at:
https://github.com/finagle/finch/blob/master/docs/src/main/tut/cookbook.md#fixing-the-toservice-compile-error
and tried adding:
import io.finch.circe._
However, the io.finch.circe._ import shows as unused (it is greyed out in the IDE), and I still get the same build error. I'm totally lost here. Can anyone help me figure out what I'm missing? Searching Google/Bing did not turn up anything very useful.
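For reference, my reading of that cookbook section is that I also need a circe Encoder for Exception in scope, roughly like the sketch below (the encoder name is mine, and I have not confirmed this against 0.14):

import io.circe.{Encoder, Json}
import io.finch.circe._

// Sketch: tell circe how to encode the Exception cases produced by handle,
// so finch-circe can supply the Encode instance that toService is asking for.
implicit val encodeException: Encoder[Exception] = Encoder.instance(e =>
  Json.obj("message" -> Json.fromString(Option(e.getMessage).getOrElse("")))
)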
Thanks.

Related

Escaping a space in a properties file for WildFly with Oracle DB

I am having a hard time using an environment variable that contains spaces in a properties file read by WildFly (24) on Linux, with Oracle 19 in RDS. The value is something like:
SELECT 1 FROM DUAL
The issue is that WildFly won't even parse the file if the spaces are in there with the normal quoting methods.
I have it set up so that the variable lives in a file called datasource.properties, which gets read from standalone.conf, where this line sits:
JAVA_OPTS="$JAVA_OPTS -DDATABASE_CONNECTION_CHECK=${DATABASE_CONNECTION_CHECK}"
It's read in with the following in standalone.conf:
set -a
. /opt/wildfly_config/datasource.properties
set +a
That in turn gets populated in standalone.xml with:
<connection-url>${env.DATABASE_JDBC_URL}</connection-url>
If I put it in quotes, oddly enough WildFly doesn't start at all; standalone.sh is no longer able to parse it:
Error: Could not find or load main class 1 Caused by: java.lang.ClassNotFoundException: 1
I have tried many things such as:
DATABASE_CONNECTION_CHECK="SELECT{ }1{ }FROM{ }DUAL"
DATABASE_CONNECTION_CHECK="'SELECT 1 FROM DUAL'"
DATABASE_CONNECTION_CHECK='SELECT 1 FROM DUAL'
DATABASE_CONNECTION_CHECK="SELECT+1+FROM+DUAL"
DATABASE_CONNECTION_CHECK="SELECT\ 1\ FROM\ DUAL"
DATABASE_CONNECTION_CHECK="\"SELECT 1 FROM DUAL\""
DATABASE_CONNECTION_CHECK="\"'SELECT 1 FROM DUAL'\""
DATABASE_CONNECTION_CHECK="SELECT%201%20FROM%20DUAL"
DATABASE_CONNECTION_CHECK="SELECT\{ }1\{ }FROM\{ }DUAL"
DATABASE_CONNECTION_CHECK='SELECT{ }1{ }FROM{ }DUAL'
DATABASE_CONNECTION_CHECK="'SELECT{ }1{ }FROM{ }DUAL'"
DATABASE_CONNECTION_CHECK="''SELECT{ }1{ }FROM{ }DUAL''"
DATABASE_CONNECTION_CHECK="SELECT%1%FROM%DUAL"
(I realize some of these don't make sense but I was looking for anything different.)
With some of these, startup looks good in the log output, but then Java doesn't like it; for some reason it sees the escape characters:
Caused by: Error : 936, Position : 9, Sql = SELECT+1+FROM+DUAL, OriginalSql = SELECT+1+FROM+DUAL, Error Msg = ORA-00936: missing expression
or
Caused by: Error : 911, Position : 6, Sql = SELECT%1%FROM%DUAL, OriginalSql = SELECT%1%FROM%DUAL, Error Msg = ORA-00911: invalid character
or
WARN [org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory] (ServerService Thread Pool -- 46) IJ030027: Destroying connection that is not valid, due to the following exception: oracle.jdbc.driver.T4CConnection#2c1456f8: java.sql.SQLException: Non supported SQL92 token at position: 7
This last one is the only one that really netted anything different. I got that with:
DATABASE_CONNECTION_CHECK="SELECT{}1{}FROM{}DUAL"
I could use sed to change the value in standalone.xml, but all of the other properties I handle this way work fine, with the exception of this one. I had a hard time with a semicolon in the JDBC string with MSSQL, and putting the semicolon in braces like "{;}" fixed that. This DB apparently does not follow the same syntax.
Is there an encoding type that will help this with Oracle and keeps wildfly happy?
EDIT: More tests:
DATABASE_CONNECTION_CHECK=\"SELECT' '1' 'FROM' 'DUAL\"
gets
Caused by: Error : 900, Position : 0, Sql = "SELECT 1 FROM DUAL", OriginalSql = "SELECT 1 FROM DUAL", Error Msg = ORA-00900: invalid SQL statement'
(doesn't seem to like the quotes)
But without the escaping of the quotes I get:
Caused by: Error : 923, Position : 9, Sql = SELECT' '1' 'FROM' 'DUAL, OriginalSql = SELECT' '1' 'FROM' 'DUAL, Error Msg = ORA-00923: FROM keyword not found where expected
A better solution was to change the sourcing of the file from:
set +a
. /opt/PrimeKey/wildfly_config/datasource.properties
set -a
to
. /opt/PrimeKey/wildfly_config/datasource.properties
and make it so all the values brought in are exported environment variables rather than plain properties:
export DATABASE_CONNECTION_CHECK="SELECT 1 FROM DUAL"

How to profile a vim plugin written in python

Vim offers the :profile command, which is really handy. But it is limited to Vim script -- when it comes to plugins implemented in python it isn't that helpful.
Currently I'm trying to understand what is causing a large delay in Denite. Since it doesn't happen in vanilla Vim, but only under some specific conditions that I'm not sure how to reproduce, I couldn't find which setting/plugin is interfering.
So I turned to profiling, and this is what I got from :profile:
FUNCTION denite#vim#_start()
Defined: ~/.vim/bundle/denite.nvim/autoload/denite/vim.vim line 33
Called 1 time
Total time: 5.343388
Self time: 4.571928
count total (s) self (s)
1 0.000006 python3 << EOF
def _temporary_scope():
    nvim = denite.rplugin.Neovim(vim)
    try:
        buffer_name = nvim.eval('a:context')['buffer_name']
        if nvim.eval('a:context')['buffer_name'] not in denite__uis:
            denite__uis[buffer_name] = denite.ui.default.Default(nvim)
        denite__uis[buffer_name].start(
            denite.rplugin.reform_bytes(nvim.eval('a:sources')),
            denite.rplugin.reform_bytes(nvim.eval('a:context')),
        )
    except Exception as e:
        import traceback
        for line in traceback.format_exc().splitlines():
            denite.util.error(nvim, line)
        denite.util.error(nvim, 'Please execute :messages command.')
_temporary_scope()
if _temporary_scope in dir():
    del _temporary_scope
EOF
1 0.000017 return []
(...)
FUNCTIONS SORTED ON TOTAL TIME
count total (s) self (s) function
1 5.446612 0.010563 denite#helper#call_denite()
1 5.396337 0.000189 denite#start()
1 5.396148 0.000195 <SNR>237_start()
1 5.343388 4.571928 denite#vim#_start()
(...)
I tried to use the python profiler directly by wrapping the main line:
import cProfile
cProfile.run(_temporary_scope(), '/path/to/log/file')
, but no luck -- just a bunch of errors from cProfile. Perhaps it is because of the way Python is started from Vim, as it is hinted here that it only works on the main thread.
I guess there should be an easier way of doing this.
The Python profiler does work when the whole code block is enclosed in a string,
cProfile.run("""
(...)
""", '/path/to/log/file')
, but it is not that helpful. Maybe that is all that is possible.
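One in-process alternative I have been considering (a sketch only, not verified inside Vim; note that cProfile.run() expects a code string, which is probably why passing the return value of _temporary_scope() produced errors) is to drive a cProfile.Profile object around the call:

import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()                          # start collecting stats
_temporary_scope()                         # the call to be profiled
profiler.disable()                         # stop collecting
profiler.dump_stats('/path/to/log/file')   # write stats for later inspection
pstats.Stats('/path/to/log/file').sort_stats('cumulative').print_stats(20)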

Groovy complains unexpected token when evaluating relational operator on decimal values without leading zeroes

I am facing an issue with the execution of the following Groovy script snippet.
GroovyShell sh = new GroovyShell();
sh.evaluate("\"abcd\".length() >= .34");
I am getting the following exception. The entire stack trace is included below.
Exception in thread "main" org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
Script1.groovy: 1: unexpected token: >= @ line 1, column 17.
"abcd".length() >= .34d
If I change .34 to 0.34, it works. However, because of some limitation, I won't be able to change the script content.
Any help to overcome this will be appreciated.
The entire stack trace:
Exception in thread "main" org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
Script1.groovy: 1: unexpected token: >= @ line 1, column 17.
"abcd".length() >= .34d
^
1 error
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
at org.codehaus.groovy.control.ErrorCollector.addFatalError(ErrorCollector.java:150)
at org.codehaus.groovy.control.ErrorCollector.addError(ErrorCollector.java:120)
at org.codehaus.groovy.control.ErrorCollector.addError(ErrorCollector.java:132)
at org.codehaus.groovy.control.SourceUnit.addError(SourceUnit.java:350)
at org.codehaus.groovy.antlr.AntlrParserPlugin.transformCSTIntoAST(AntlrParserPlugin.java:144)
at org.codehaus.groovy.antlr.AntlrParserPlugin.parseCST(AntlrParserPlugin.java:110)
at org.codehaus.groovy.control.SourceUnit.parse(SourceUnit.java:234)
at org.codehaus.groovy.control.CompilationUnit$1.call(CompilationUnit.java:168)
at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:943)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:605)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
at groovy.lang.GroovyShell.evaluate(GroovyShell.java:584)
at groovy.lang.GroovyShell.evaluate(GroovyShell.java:623)
at groovy.lang.GroovyShell.evaluate(GroovyShell.java:594)
at groovytest.Testtest.main(Testtest.java:18)
Your Groovy snippet is incorrect: Groovy does not support decimal literals without a leading zero for numbers smaller than 1.0. If you try to compile the following expression directly using groovyc:
"abcd".length() >= .34
compilation will fail with an error like:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
test.groovy: 2: Unexpected input: '.' @ line 2, column 20.
"abcd".length() >= .34
^
1 error
Java supports this notation; however, Groovy (from 2.x up to version 3.0.0-alpha-3) does not.
Solid solution
Fix the input Groovy snippet so that it contains only valid, compile-ready code. Any invalid Groovy statements or expressions will lead to compilation errors.
Workaround: add leading zeros with the replaceAll() method
The only way to compile such an incorrect snippet is to replace every occurrence of " .\d+" (a space, followed by a dot and at least one digit) with " 0.$1". Consider the following example:
def snippet = "\"abcd\".length() >= .34; \"efgh\".length() >= .22; \"xyz\".length() >= 0.11;"
println snippet.replaceAll(' \\.(\\d+)', ' 0.$1')
It adds a 0 to every decimal number whose leading zero is missing. Running this example prints the following output to the console:
"abcd".length() >= 0.34; "efgh".length() >= 0.22; "xyz".length() >= 0.11;
If you pass the modified snippet to the GroovyShell.evaluate() method, it will run with no errors.
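Applied to the original Java call from the question (Testtest.java), the same idea looks roughly like this (a sketch; the class name is illustrative):

import groovy.lang.GroovyShell;

public class SanitizedEval {
    public static void main(String[] args) {
        String snippet = "\"abcd\".length() >= .34";
        // Add the missing leading zeros before handing the snippet to Groovy.
        String fixed = snippet.replaceAll(" \\.(\\d+)", " 0.$1");
        Object result = new GroovyShell().evaluate(fixed);  // evaluates: "abcd".length() >= 0.34
        System.out.println(result);                         // prints: true
    }
}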
Of course, this is not a rock-solid solution; it is just a way to automatically fix some of the syntax errors introduced in the code snippet. There are corner cases where this workaround may cause side effects, so be aware of that.

Pytorch, Unable to get repr for <class 'torch.Tensor'>

I'm implementing some RL in PyTorch and had to write my own mse_loss function (which I found on Stackoverflow ;) ).
The loss function is:
def mse_loss(input_, target_):
    return torch.sum(
        (input_ - target_) * (input_ - target_)) / input_.data.nelement()
Now, in my training loop, the first input is something like:
tensor([-1.7610e+10]), tensor([-6.5097e+10])
With this input I'll get the error:
Unable to get repr for <class 'torch.Tensor'>
Computing a = (input_ - target_) works fine, while b = a * a (or, equivalently, b = torch.pow(a, 2)) fails with the error mentioned above.
Does anyone know a fix for this?
Thanks a lot!
Update:
I just tried using torch.nn.functional.mse_loss, which results in the same error.
I had the same error when I used the code below:
criterion = torch.nn.CrossEntropyLoss().cuda()
output=output.cuda()
target=target.cuda()
loss=criterion(output, target)
but I finally found my mistake: output is like tensor([[0.5746, 0.4254]]) and target is like tensor([2]); the index 2 is out of range for output, which only has two classes.
When I do not use the GPU, the error message is:
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /opt/conda/conda-bld/pytorch-nightly_1547458468907/work/aten/src/THNN/generic/ClassNLLCriterion.c:93
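In other words, every value in target must be a valid class index for output, i.e. smaller than the number of columns. A minimal sketch using the shapes from above:

import torch

criterion = torch.nn.CrossEntropyLoss()
output = torch.tensor([[0.5746, 0.4254]])  # shape (1, 2): two classes, valid indices are 0 and 1
bad_target = torch.tensor([2])             # 2 is out of range, so the loss fails (cryptically on GPU)
good_target = torch.tensor([1])            # a valid class index
loss = criterion(output, good_target)      # computes normally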
Are you using a GPU?
I had a similar problem (but I was using gather operations), and when I moved my tensors to the CPU I got a proper error message. I fixed the error, switched back to the GPU, and it was alright.
Maybe PyTorch has trouble outputting the correct error when it comes from inside the GPU.
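For the mse_loss case in the question, that debugging step might look like this (just a sketch, rerunning the failing expression on CPU copies of the tensors):

a = (input_ - target_).detach().cpu()   # move the intermediate result off the GPU
b = a * a                               # if this still fails, the CPU error message is usually readable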

Add data into prolog with text

?- dynamic(setup/5).

setup :-
    seeing(S),
    see('people.txt'),
    read_data,
    write('data read'),
    nl,
    seen,
    see(S).

read_data :-
    read(A),
    process(A).

process(A) :- A == end_of_file.
process(A) :-
    A \== end_of_file,
    write('1'),
    read(B),
    read(C),
    read(D),
    read(E),
    assertz(person(A,B,C,D,E)),
    read_data.
and the text file contains:
john.will.30.london.doctor.
martha.will.33.portsea.doctor.
henry.smith.26.manchester.doctor.
The result comes out as:
?- setup.
* Syntax Error
* Syntax Error
* Syntax Error
* Syntax Error
* Syntax Error
data read
yes
What is happening? What did I do wrong?
You are reading with read/1, which expects valid Prolog terms as input. However, your data is:
john.will.30.london.doctor.
which is not valid Prolog syntax. Write something like:
person(john,will,30,london,doctor).
instead. Most often, people do not read in such data manually. Instead, they load the file with ['datafile.pl'] or other commands.
