In my build I get this error:
Type mismatch: inferred type is java.time.ZonedDateTime but org.threeten.bp.ZonedDateTime was expected
private suspend fun initWeatherData() {
    val lastWeatherLocation = weatherLocationDao.getLocation().value
    if (lastWeatherLocation == null
        || locationProvider.hasLocationChanged(lastWeatherLocation)) {
        fetchedCurrentWeather()
        return
    }
    if (isFetchCurrentNeeded(lastWeatherLocation.zonedDateTime))
        fetchedCurrentWeather()
}

private suspend fun fetchedCurrentWeather() {
    weatherNetworkDataSource.fetchCurrentWeather(
        locationProvider.getPreferredLocationString(),
        Locale.getDefault().language
    )
}

private fun isFetchCurrentNeeded(lastFetchTime: ZonedDateTime): Boolean {
    val thirtyMinutesAgo = ZonedDateTime.now().minusMinutes(30)
    return lastFetchTime.isBefore(thirtyMinutesAgo)
}
The error occurs because the project mixes two different ZonedDateTime classes: one declaration uses java.time.ZonedDateTime where another expects org.threeten.bp.ZonedDateTime. Pick one package and import it consistently in both the entity and isFetchCurrentNeeded. Beyond that, the java.util Date-Time API and its formatting API, SimpleDateFormat, are outdated and error-prone. It is recommended to stop using them completely and switch to the modern Date-Time API.
Solution using java.time, the modern Date-Time API:
Try it like this:
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class Main {
    public static void main(String[] args) {
        // The given ZonedDateTime
        ZonedDateTime zdtAbidjan = ZonedDateTime.of(
                LocalDateTime.of(LocalDate.of(2021, 9, 16),
                        LocalTime.of(12, 0)),
                ZoneId.of("Africa/Abidjan")
        );
        System.out.println(zdtAbidjan);

        ZonedDateTime zdtLondon = zdtAbidjan.withZoneSameInstant(ZoneId.of("Europe/London"));
        System.out.println(zdtLondon);
    }
}
Output
2021-09-16T12:00Z[Africa/Abidjan]
2021-09-16T13:00+01:00[Europe/London]
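Applied to the check in your question, the same 30-minute staleness test with java.time looks like the sketch below (FetchPolicy is a hypothetical wrapper class; the build error itself just means two different ZonedDateTime classes are being mixed, so whichever package you settle on - java.time, or org.threeten.bp if the entity must stay on ThreeTenABP - the import has to match on both sides of the call):

import java.time.ZonedDateTime;

public class FetchPolicy {
    // True when the last fetch happened more than 30 minutes ago.
    static boolean isFetchCurrentNeeded(ZonedDateTime lastFetchTime) {
        ZonedDateTime thirtyMinutesAgo = ZonedDateTime.now().minusMinutes(30);
        return lastFetchTime.isBefore(thirtyMinutesAgo);
    }
}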
I am trying to define some different mocked behaviours when a method is called with different parameters. Unfortunately, I find that the second time I try to mock the given method on a (mocked) class, it runs the actual method, causing an exception because the matchers are not valid parameters. Anyone know how I can prevent this?
manager = PowerMockito.mock(Manager.class);
try {
    PowerMockito.whenNew(Manager.class).withArguments(anyString(), anyString())
            .thenReturn(manager);
} catch (Exception e) {
    e.printStackTrace();
}

FindAuthorityDescriptionRequestImpl validFindAuthorityDescription = mock(FindAuthorityDescriptionRequestImpl.class);

PowerMockito.when(manager.createFindAuthorityDescriptionRequest(anyString(), anyString())).thenCallRealMethod();
PowerMockito.when(manager.createFindAuthorityDescriptionRequest(Matchers.eq(VALID_IK),
        Matchers.eq(VALID_CATEGORY_NAME))).thenReturn(validFindAuthorityDescription);
PowerMockito.when(manager.processRequest(Matchers.any(FindAuthorityDescriptionRequest.class)))
        .thenThrow(ManagerException.class);
PowerMockito.when(manager.processRequest(Matchers.eq(validFindAuthorityDescription)))
        .thenReturn(generateValidAuthorityDescriptionResponse());
The following code is a working example based on your mock setup (I've added dummy classes to make it runnable).
The code also contains asserts to verify that the mocked methods return expected values. Also, the real method createFindAuthorityDescriptionRequest is only called once.
Note: This was tested with `powermock 2.0.7` and `mockito 2.21.0`.
If issues persist, I'd suggest checking whether the real method is additionally called from somewhere else in your program (other than the code quoted in your problem statement).
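For reference, a Gradle sketch of the test dependencies this setup assumes (artifact names are the standard PowerMock JUnit4 module and its Mockito2 bridge; double-check the versions against your own Mockito):

testImplementation 'org.powermock:powermock-module-junit4:2.0.7'
testImplementation 'org.powermock:powermock-api-mockito2:2.0.7'
testImplementation 'org.mockito:mockito-core:2.21.0'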
package com.example.stack;

import org.junit.Test;
import org.junit.function.ThrowingRunnable;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThrows;
import static org.mockito.ArgumentMatchers.*;
import static org.powermock.api.mockito.PowerMockito.mock;

@RunWith(PowerMockRunner.class)
@PrepareForTest(fullyQualifiedNames = "com.example.stack.*")
public class StackApplicationTests {

    private static final String VALID_IK = "IK";
    private static final String VALID_CATEGORY_NAME = "CATEGORY_NAME";
    private static final Object VALID_RESPONSE = "RESPONSE";

    @Test
    public void test() {
        Manager manager = mock(Manager.class);
        try {
            PowerMockito.whenNew(Manager.class).withArguments(anyString(), anyString())
                    .thenReturn(manager);
        } catch (Exception e) {
            e.printStackTrace();
        }

        FindAuthorityDescriptionRequestImpl validFindAuthorityDescription = mock(FindAuthorityDescriptionRequestImpl.class);

        PowerMockito.when(manager.createFindAuthorityDescriptionRequest(anyString(), anyString())).thenCallRealMethod();
        PowerMockito.when(manager.createFindAuthorityDescriptionRequest(eq(VALID_IK), eq(VALID_CATEGORY_NAME)))
                .thenReturn(validFindAuthorityDescription);
        PowerMockito.when(manager.processRequest(any(FindAuthorityDescriptionRequest.class)))
                .thenThrow(ManagerException.class);
        PowerMockito.when(manager.processRequest(eq(validFindAuthorityDescription)))
                .thenReturn(VALID_RESPONSE);

        // verify that the mock returns expected results
        assertEquals(Manager.REAL_RESULT, manager.createFindAuthorityDescriptionRequest("any", "any"));
        assertEquals(validFindAuthorityDescription, manager.createFindAuthorityDescriptionRequest("IK", "CATEGORY_NAME"));
        assertThrows(ManagerException.class, new ThrowingRunnable() {
            @Override
            public void run() {
                manager.processRequest(new FindAuthorityDescriptionRequestImpl());
            }
        });
        assertEquals(VALID_RESPONSE, manager.processRequest(validFindAuthorityDescription));
    }
}

interface FindAuthorityDescriptionRequest {}

class FindAuthorityDescriptionRequestImpl implements FindAuthorityDescriptionRequest {}

class ManagerException extends RuntimeException {}

class Manager {
    public static FindAuthorityDescriptionRequestImpl REAL_RESULT = new FindAuthorityDescriptionRequestImpl();

    public Manager(String s1, String s2) {}

    public FindAuthorityDescriptionRequest createFindAuthorityDescriptionRequest(String ik, String category) {
        return REAL_RESULT;
    }

    public Object processRequest(FindAuthorityDescriptionRequest request) {
        return null;
    }
}
I have the following code:
import groovy.transform.ToString

@ToString(includeNames = true)
class Simple {
    String creditPoints
}

Simple simple = new Simple()
simple.with {
    creditPoints : "288"
}
println simple
Clearly, I made a mistake here with creditPoints : "288". It should have been creditPoints = "288".
I expected Groovy to fail at runtime, saying that I made a mistake and should have used creditPoints = "288", but clearly it did not.
Since it did not fail, what did Groovy do with the closure I created?
From the Groovy compiler's perspective, there is no mistake in your closure code. The compiler sees creditPoints : "288" as a labeled statement, which is a legal construction in the Groovy programming language. As the documentation says, a labeled statement does not add anything to the resulting bytecode, but it can be used, for instance, by AST transformations (the Spock Framework uses it heavily).
It becomes clearer and easier to understand if you format the code to match the label-statement use case, e.g.:
class Simple {
    String creditPoints

    static void main(String[] args) {
        Simple simple = new Simple()
        simple.with {
            creditPoints:
            "288"
        }
        println simple
    }
}
(NOTE: I put your script inside the main method body to show you its bytecode representation in the next section.)
Now that we know how the compiler sees this construction, let's take a look at what the final bytecode looks like. To do this we will decompile the .class file (I use IntelliJ IDEA for that - you simply open the .class file in IDEA and it decompiles it for you):
//
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by Fernflower decompiler)
//

import groovy.lang.Closure;
import groovy.lang.GroovyObject;
import groovy.lang.MetaClass;
import groovy.transform.ToString;
import org.codehaus.groovy.runtime.DefaultGroovyMethods;
import org.codehaus.groovy.runtime.GeneratedClosure;
import org.codehaus.groovy.runtime.InvokerHelper;

@ToString
public class Simple implements GroovyObject {
    private String creditPoints;

    public Simple() {
        MetaClass var1 = this.$getStaticMetaClass();
        this.metaClass = var1;
    }

    public static void main(String... args) {
        Simple simple = new Simple();

        class _main_closure1 extends Closure implements GeneratedClosure {
            public _main_closure1(Object _outerInstance, Object _thisObject) {
                super(_outerInstance, _thisObject);
            }

            public Object doCall(Object it) {
                return "288";
            }

            public Object call(Object args) {
                return this.doCall(args);
            }

            public Object call() {
                return this.doCall((Object)null);
            }

            public Object doCall() {
                return this.doCall((Object)null);
            }
        }

        DefaultGroovyMethods.with(simple, new _main_closure1(Simple.class, Simple.class));
        DefaultGroovyMethods.println(Simple.class, simple);
        Object var10000 = null;
    }

    public String toString() {
        StringBuilder _result = new StringBuilder();
        Boolean $toStringFirst = Boolean.TRUE;
        _result.append("Simple(");
        if ($toStringFirst == null ? false : $toStringFirst) {
            Boolean var3 = Boolean.FALSE;
        } else {
            _result.append(", ");
        }

        if (this.getCreditPoints() == this) {
            _result.append("(this)");
        } else {
            _result.append(InvokerHelper.toString(this.getCreditPoints()));
        }

        _result.append(")");
        return _result.toString();
    }

    public String getCreditPoints() {
        return this.creditPoints;
    }

    public void setCreditPoints(String var1) {
        this.creditPoints = var1;
    }
}
As you can see, the closure used with the with method is represented as an inner _main_closure1 class. This class extends the Closure class and implements the GeneratedClosure interface. The body of the closure is encapsulated in the public Object doCall(Object it) method. This method only returns the "288" string, which is expected - the last statement of a closure becomes a return statement by default. There is no label statement in the generated bytecode, which is also expected, as labels get stripped at the CANONICALIZATION Groovy compiler phase. (To actually set the property, the closure body needs an assignment: creditPoints = "288".)
In Spark, is it possible to have a suffix in the path after the partition-by columns?
For example:
I am writing the data to the following path:
/db_name/table_name/dateid=20171009/event_type=TEST/
dataset.write().partitionBy("event_type").save("/db_name/table_name/dateid=20171009");
Is it possible to write it to the following path with dynamic partitioning?
/db_name/table_name/dateid=20171009/event_type=TEST/1507764830
It turns out newTaskTempFile is the right place for this; the earlier FileOutputCommitter-based solution below doesn't work for dynamic partitions.
// Override in a custom commit protocol class (assumed to extend SQLHadoopMapReduceCommitProtocol);
// `timestamp` is assumed to be a field initialized once per job, e.g. System.currentTimeMillis().
@Override
public String newTaskTempFile(TaskAttemptContext taskContext, Option<String> dir, String ext) {
    Option<String> dirWithTimestamp = Option.apply(dir.get() + "/" + timestamp);
    return super.newTaskTempFile(taskContext, dirWithTimestamp, ext);
}
// sample json
{"event_type": "type_A", "dateid":"20171009", "data":"garbage" }
{"event_type": "type_B", "dateid":"20171008", "data":"garbage" }
{"event_type": "type_A", "dateid":"20171007", "data":"garbage" }
{"event_type": "type_B", "dateid":"20171006", "data":"garbage" }

// save as partition
spark.read
    .json("./data/sample.json")
    .write
    .partitionBy("dateid", "event_type")
    .saveAsTable("sample")

// result
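(The result listing itself was lost; with the sample data above the partition values follow directly from the JSON, so the table directory - by default spark-warehouse/sample/ - would contain a layout like this, with illustrative file names:)

dateid=20171006/event_type=type_B/part-*.parquet
dateid=20171007/event_type=type_A/part-*.parquet
dateid=20171008/event_type=type_B/part-*.parquet
dateid=20171009/event_type=type_A/part-*.parquet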
After reading the source code, I found the FileOutputCommitter to be the way to do this.
SparkSession spark = SparkSession
        .builder()
        .master("local[2]")
        .config("spark.sql.parquet.output.committer.class", "com.estudio.spark.ESParquetOutputCommitter")
        .config("spark.sql.sources.commitProtocolClass", "com.estudio.spark.ESSQLHadoopMapReduceCommitProtocol")
        .getOrCreate();

ESSQLHadoopMapReduceCommitProtocol.realAppendMode = false;

spark.range(10000)
        .withColumn("type", rand()
                .multiply(6).cast("int"))
        .write()
        .mode(Append)
        .partitionBy("type")
        .format("parquet")
        .save("/tmp/spark/test1/");
Here is the customized ParquetOutputCommitter; it's the place to customize the output path. In this case, we suffix the timestamp. We have to make sure it's synchronized. Here is the code:
import lombok.extern.slf4j.Slf4j;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.parquet.hadoop.ParquetOutputCommitter;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

@Slf4j
public class ESParquetOutputCommitter extends ParquetOutputCommitter {

    private final static Map<String, Path> pathMap = new HashMap<>();

    public final static synchronized Path getNewPath(final Path path) {
        final String key = path.toString();
        log.debug("path.key: {}", key);
        if (pathMap.containsKey(key)) {
            return pathMap.get(key);
        }
        final Path newPath = new Path(path, Long.toString(System.currentTimeMillis()));
        pathMap.put(key, newPath);
        log.info("---> Path: {}, newPath: {}", path, newPath);
        return newPath;
    }

    public ESParquetOutputCommitter(Path outputPath, TaskAttemptContext context) throws IOException {
        super(getNewPath(outputPath), context);
        log.info("this: {}", this);
    }
}
We can also use the getNewPath method to get the customized path. So far, this works for SaveMode.Overwrite.
SaveMode.Append is a little different. To cover Append mode, we need to override SQLHadoopMapReduceCommitProtocol so it always returns the customized ParquetOutputCommitter. Here is the code:
import lombok.extern.slf4j.Slf4j;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;
import org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol;
import org.apache.spark.sql.internal.SQLConf;

import java.lang.reflect.Constructor;

@Slf4j
public class ESSQLHadoopMapReduceCommitProtocol extends SQLHadoopMapReduceCommitProtocol {

    public static boolean realAppendMode = false;

    private String jobId;
    private String path;
    private boolean isAppend;

    public ESSQLHadoopMapReduceCommitProtocol(String jobId, String path, boolean isAppend) {
        super(jobId, path, isAppend);
        this.jobId = jobId;
        this.path = path;
        this.isAppend = isAppend;
    }

    @Override
    public OutputCommitter setupCommitter(TaskAttemptContext context) {
        try {
            OutputCommitter committer = context.getOutputFormatClass().newInstance().getOutputCommitter(context);

            if (realAppendMode) {
                log.info("Using output committer class {}", committer.getClass().getCanonicalName());
                return committer;
            }

            final Configuration configuration = context.getConfiguration();
            final String key = SQLConf.OUTPUT_COMMITTER_CLASS().key();
            final Class<? extends OutputCommitter> clazz;
            clazz = configuration.getClass(key, null, OutputCommitter.class);

            if (clazz == null) {
                log.info("Using output committer class {}", committer.getClass().getCanonicalName());
                return committer;
            }

            log.info("Using user defined output committer class {}", clazz.getCanonicalName());
            if (FileOutputCommitter.class.isAssignableFrom(clazz)) {
                Constructor<? extends OutputCommitter> ctor = clazz.getDeclaredConstructor(Path.class, TaskAttemptContext.class);
                committer = ctor.newInstance(new Path(path), context);
            } else {
                Constructor<? extends OutputCommitter> ctor = clazz.getDeclaredConstructor();
                committer = ctor.newInstance();
            }
            return committer;
        } catch (Exception e) {
            e.printStackTrace();
            return super.setupCommitter(context);
        }
    }
}
I also added a static flag, realAppendMode, to turn all of this off.
Again, I am not a Spark expert yet; let me know if there is any issue with this solution.
I am trying to override the class DefaultScreenNameValidator, which implements the ScreenNameValidator interface. For this, I copied the class and put it into another module. The one change I made is in the annotation, as follows:

@Component(
    property = {
        "service.ranking:Integer=500"
    }
)

I got a successful build with this. But when I tried to deploy the project, I got the error java.lang.NoClassDefFoundError: com/liferay/portal/kernel/security/auth/ScreenNameValidator. Can you suggest how to get rid of this error? Thanks in advance.
I'm wondering, wouldn't it be better to instead create a module that also implements the ScreenNameValidator interface and define your custom logic in there? Then you can simply tell Liferay to use that validator instead of the DefaultScreenNameValidator.
For example, a minimalistic implementation:
import com.liferay.portal.kernel.security.auth.ScreenNameValidator;
import org.osgi.service.component.annotations.Component;

@Component(
    immediate = true,
    service = ScreenNameValidator.class
)
public class CustomScreenNameValidator implements ScreenNameValidator {

    @Override
    public boolean validate(long companyId, String screenName) {
        // Your custom logic
        return true; // placeholder so the stub compiles
    }
}
Make sure you have the dependency on portal-kernel in the build.gradle:

dependencies {
    compile 'com.liferay.portal:com.liferay.portal.kernel:2.0.0'
}
I made a ScreenNameValidator using blade-cli; you can see the project at https://github.com/bruinen/liferay-blade-samples/tree/master/liferay-workspace/modules/blade.screenname.validator
import com.liferay.portal.kernel.security.auth.ScreenNameValidator;
import org.osgi.service.component.annotations.Component;

import java.util.Locale;

@Component(
    immediate = true,
    property = {"service.ranking:Integer=100"},
    service = ScreenNameValidator.class
)
public class CustomScreenNameValidator implements ScreenNameValidator {

    @Override
    public String getAUIValidatorJS() {
        return "function(val) {return !(val.indexOf(\"admin\") !==-1)}";
    }

    @Override
    public String getDescription(Locale locale) {
        return "The screenName contains reserved words";
    }

    @Override
    public boolean validate(long companyId, String screenName) {
        return !screenName.contains("admin");
    }
}
I am new to ASM. I have a class file containing runtime-visible annotations on methods. I want to parse this class file and select annotations according to specific criteria. I looked into the documentation of ASM and tried visibleAnnotations, but I can't seem to print the list of annotations on a method, which I can see in my class file.
My code is as follows:
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Iterator;

import org.objectweb.asm.tree.AnnotationNode;
import org.objectweb.asm.tree.ClassNode;
import org.objectweb.asm.tree.MethodNode;
import org.objectweb.asm.ClassReader;

public class ByteCodeParser {
    public static void main(String[] args) throws Exception {
        InputStream in = new FileInputStream("sample.class");
        ClassReader cr = new ClassReader(in);
        ClassNode classNode = new ClassNode();
        // ClassNode is a ClassVisitor
        cr.accept(classNode, 0);

        Iterator<MethodNode> i = classNode.methods.iterator();
        while (i.hasNext()) {
            MethodNode mn = i.next();
            System.out.println(mn.name + "" + mn.desc);
            System.out.println(mn.visibleAnnotations);
        }
    }
}
The output is:
<clinit>()V
null
<init>()V
null
MyRandomFunction1()V
[org.objectweb.asm.tree.AnnotationNode#5674cd4d]
MyRandomFunction2()V
[org.objectweb.asm.tree.AnnotationNode#63961c42]
MyRandomFunction1 and MyRandomFunction2 have annotations, but I can't seem to understand [org.objectweb.asm.tree.AnnotationNode#5674cd4d].
I solved this issue myself; I had to iterate over the annotations, which I didn't realize initially.
if (mn.visibleAnnotations != null) {
    Iterator<AnnotationNode> j = mn.visibleAnnotations.iterator();
    while (j.hasNext()) {
        AnnotationNode an = j.next();
        System.out.println(an.values);
    }
}
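If you also want the annotation's type rather than its object identity (the AnnotationNode#5674cd4d output is just the default toString), AnnotationNode exposes the type descriptor in its desc field, and values holds the element name/value pairs in alternating order. A small sketch for the loop body above:

// an.desc is a type descriptor such as "Lcom/example/MyAnnotation;"
System.out.println(org.objectweb.asm.Type.getType(an.desc).getClassName());
// an.values alternates element names and values: [name1, value1, name2, value2, ...]
System.out.println(an.values);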