Nokia 3310: MIDlet always gives "Can't compile the file" - java-me

After reading this comment proving that it's possible to write custom apps for the new Nokia 3310 3G (TA-1006), I'm trying to get my own app running.
After doing a lot of reading about what MIDP, CLDC etc. are, I installed the Java ME SDK (on a fresh Ubuntu installation since Oracle only supports that or Windows), Eclipse and the Sun Wireless toolkit.
First off, I couldn't find any information on which versions of MIDP and CLDC the device supports, so I went ahead and tried several possible permutations; these are my results:
CLDC \ MIDP | 1.0 | 2.0 | 2.1 |
1.0         |  *  |  *  |  X  |
1.1         |  *  |  *  |  ?  |
1.8         |  X  |  X  |  ?  |
I have not tried the ? combinations, since MIDP 2.1 does not work and there is nothing to be gained; the X combinations give the error "Can't install [MIDlet name] because it doesn't work with this phone".
So it seems like the phone supports the MIDP 2.0 profile and the CLDC 1.1 configuration. However, when I try to install my app (with any of the * combinations), it always goes like this:
"[MIDlet name] is untrusted. Continue anyway?" > Ok (this was expected)
"Can't compile the file" (here is where I'm stuck)
What I have tried so far (besides the various version permutations):
Initially I tried with a very basic MIDlet subclass:
public void startApp()
{
    Form form = new Form("Hello");
    form.append(new StringItem("Hello", "World!"));
    Display.getDisplay(this).setCurrent(form);
}
Next, I tried using these templates provided by the Eclipse plugin:
Splash MIDlet Template
Hello World Midlet Template
When selecting the runtime configuration (I always picked DefaultColorPhone), I adjusted the version profile from MIDP-2.1 to MIDP-2.0
Tried the other configurations, MediaControlSkin and QwertyDevice
I always produced the *.jar and *.jad files by clicking the "Packaging > Create Package" button in the "Application Descriptor" view.
At some point it turned into experimenting with various settings I had little confidence in, reading up, and repeating. When looking for alternatives, the whole journey became quite frustrating, since a lot of links are on dodgy websites, are 404s, or refer to the old 3310 phone.
TL;DR
What configuration and build steps are necessary to get a simple (unsigned) application compiled for the new Nokia 3310?
Here are the full contents of the simplest failing example, which imo should work:
$ tree
.
├── Application Descriptor
├── bin
│   └── com
│       └── stackoverflow
│           └── kvn
│               └── test
│                   └── SOExample.class
├── build.properties
├── deployed
│   └── DefaultColorPhoneM2.0
│       ├── SOTest.jad
│       └── SOTest.jar
├── res
└── src
    └── com
        └── stackoverflow
            └── kvn
                └── test
                    └── SOExample.java

13 directories, 6 files
$ cat Application\ Descriptor
MIDlet-1: SOExample,,com.stackoverflow.kvn.test.SOExample
MIDlet-Jar-URL: SOTest.jar
MIDlet-Name: SOTest MIDlet Suite
MIDlet-Vendor: MIDlet Suite Vendor
MIDlet-Version: 1.0.0
MicroEdition-Configuration: CLDC-1.1
MicroEdition-Profile: MIDP-2.0
$ cat build.properties
# MTJ Build Properties
DefaultColorPhoneM2.0.includes=src/com/stackoverflow/kvn/test/SOExample.java\
DefaultColorPhoneM2.0.excludes=\
$ cat src/com/stackoverflow/kvn/test/SOExample.java
package com.stackoverflow.kvn.test;

import javax.microedition.lcdui.*;
import javax.microedition.midlet.*;

public class SOExample extends MIDlet {
    private Form form;

    protected void destroyApp(boolean unconditional)
            throws MIDletStateChangeException { /* nop */ }

    protected void pauseApp() { /* nop */ }

    protected void startApp() throws MIDletStateChangeException {
        form = new Form("Hello");
        form.append(new StringItem("Hello", "World!"));
        Display.getDisplay(this).setCurrent(form);
    }
}
Software info of the device: Model: TA-1006; Software: 15.0.0.17.00; OS version: MOCOR_W17.44.3_Release; Firmware number: sc7701_barphone

Related

With AWS CDK Python, how to create a subdirectory, import a .py, and call a method in there?

I am attempting to get the simplest example of creating a S3 bucket with the AWS CDK Python with no luck.
I want to put the code to create the bucket in another file (which file exists in a subdirectory).
What I am doing works with every other Python project I have developed or started.
Process:
I created an empty directory, aws_cdk_python/. Then, inside that directory, I ran
$ cdk init --language python
to lay out the structure.
This created another subdirectory with the same name, aws_cdk_python/, and created a single .py file within that directory where I could begin adding code in the __init__(self) method (constructor).
I was able to add code there to create a S3 bucket.
Now I created a subdirectory with an __init__.py and a file called create_s3_bucket.py.
I put the code to create an S3 bucket in this file, in a method called main:
file: create_s3_bucket.py

def main(self):
    <code to create s3 bucket here>
When I run the code, it will create the App Stack with no errors, but the S3 bucket will not be created.
Here is my project layout:
aws_cdk_python/
setup.py
aws_cdk_python/
aws_cdk_python_stack.py
my_aws_s3/
create_s3_bucket.py
setup.py contains the following two lines:
package_dir={"": "aws_cdk_python"},
packages=setuptools.find_packages(where="aws_cdk_python"),
The second line here says to look in the aws_cdk_python/ directory and to search its subfolders recursively for packages.
In aws_cdk_python_stack.py, I have this line:
from my_aws_s3.create_s3_bucket import CreateS3Bucket
then in __init__ in aws_cdk_python_stack.py, I instantiate the object:
my_aws_s3 = CreateS3Bucket()
and then I make a call like so:
my_aws_s3.main() <== code to create the S3 bucket is here
I have followed this pattern on numerous Python projects before, using find_packages() in setup.py.
I have also run
$ python -m pip install -r requirements.txt
which should pick up the dependencies pointed to in setup.py.
Questions:
- Has anyone using the AWS CDK with Python done this, or does anyone have recommendations for code organization?
I do not want all the code for the entire stack to be in aws_cdk_python_stack.py __init__() method.
Any ideas on why there is no error displayed in my IDE? All dependencies are resolved and methods are found, but when I run, nothing happens.
How can I see any error messages? None appear with $ cdk deploy; it just creates the stack but not the S3 bucket, even though I have code to call and create an S3 bucket.
This is frustrating, it should work.
I have other subdirectories that I want to create under aws_cdk_python/aws_cdk_python/<dir>, put an __init__.py there (an empty file), and import the classes in the top-level aws_cdk_python_stack.py.
any help to get this working would be greatly appreciated.
cdk.json looks like this (laid down by cdk init --language python):
{
  "app": "python app.py",
  "context": {
    "@aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId": true,
    "@aws-cdk/core:enableStackNameDuplicates": "true",
    "aws-cdk:enableDiffNoFail": "true",
    "@aws-cdk/core:stackRelativeExports": "true",
    "@aws-cdk/aws-ecr-assets:dockerIgnoreSupport": true,
    "@aws-cdk/aws-secretsmanager:parseOwnedSecretName": true,
    "@aws-cdk/aws-kms:defaultKeyPolicies": true,
    "@aws-cdk/aws-s3:grantWriteWithoutAcl": true,
    "@aws-cdk/aws-ecs-patterns:removeDefaultDesiredCount": true,
    "@aws-cdk/aws-rds:lowercaseDbIdentifier": true,
    "@aws-cdk/aws-efs:defaultEncryptionAtRest": true,
    "@aws-cdk/aws-lambda:recognizeVersionProps": true,
    "@aws-cdk/aws-cloudfront:defaultSecurityPolicyTLSv1.2_2021": true
  }
}
app.py looks like this

import os

from aws_cdk import core as cdk
from aws_cdk import core

from aws_cdk_python.aws_cdk_python_stack import AwsCdkPythonStack

app = core.App()
AwsCdkPythonStack(app, "AwsCdkPythonStack")
app.synth()
As of 2021-12-31, this has not been solved.
Not entirely sure, but I guess it depends on what your cdk.json file looks like. It contains the command that cdk deploy runs. E.g.:
{
  "app": "python main.py",  <===== this guy over here assumes the whole app is instantiated by running main.py
  "context": {
    ...
  }
}
Since I don't see this entrypoint present in your project structure it might be related to that.
Usually after running cdk init you should at least be able to synthesize. Typically app.py keeps your main App() definition, while stacks and constructs go in subfolders. Stacks are often instantiated in app.py, and the constructs are instantiated in the stack definition files.
I hope it helped you a bit further!
Edit:
Just an example of a working tree is shown below:
aws_cdk_python
├── README.md
├── app.py
├── cdk.json
├── aws_cdk_python
│   ├── __init__.py
│   ├── example_stack.py
│   └── s3_stacks                  <= this is your subfolder with s3 stacks
│       ├── __init__.py
│       └── s3_stack_definition.py <== file with an s3 stack in it
├── requirements.txt
├── setup.py
└── source.bat
aws_cdk_python/s3_stacks/s3_stack_definition.py:

from aws_cdk import core as cdk
from aws_cdk import aws_s3

class S3Stack(cdk.Stack):
    def __init__(self, scope: cdk.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        bucket = aws_s3.Bucket(self, "MyEncryptedBucket",
            encryption=aws_s3.BucketEncryption.KMS
        )
app.py:

from aws_cdk import core
from aws_cdk_python.s3_stacks.s3_stack_definition import S3Stack

app = core.App()
S3Stack(app, "ExampleStack")
app.synth()

Mono on Linux: Event Logging

I am working on getting C# applications written for Windows to run on Linux using Mono. I am using Mono 5.18.0.240 from the Mono repository, on Ubuntu 18.04.1.
My understanding is that Mono includes a local file-based event logger. By setting the environment variable MONO_EVENTLOG_TYPE to local (followed by an optional path), events are logged to a file-based log. However, logged events do not seem to be sorted into the source directories that get created; instead, all events end up in the same directory, which makes navigating the files more difficult when many events are logged.
Consider this C# program that just logs two events each for three event sources:
using System;
using System.Diagnostics;

namespace EventLogTest
{
    class Program
    {
        public static void Main()
        {
            var sources = new string[] { "source1", "source2", "source3" };

            foreach (var source in sources)
            {
                if (!EventLog.SourceExists(source))
                    EventLog.CreateEventSource(source, "Application");

                EventLog log = new EventLog();
                log.Source = source;
                log.WriteEntry("some event");
                log.WriteEntry("another event");
            }
        }
    }
}
We can build the program into an executable and then run it:
$ csc events.cs
Microsoft (R) Visual C# Compiler version 2.8.2.62916 (2ad4aabc)
Copyright (C) Microsoft Corporation. All rights reserved.
$ MONO_EVENTLOG_TYPE=local:./eventlog mono ./events.exe
The resulting structure of the eventlog directory looks like this:
$ tree ./eventlog
./eventlog
└── Application
    ├── 1.log
    ├── 2.log
    ├── 3.log
    ├── 4.log
    ├── 5.log
    ├── 6.log
    ├── Application
    ├── source1
    ├── source2
    └── source3

5 directories, 6 files
Note that the directories source1, source2, and source3 were created, but the six log files were placed in the top level Application directory instead of the source directories. If we look at the source field of each log file, we can see that the source is correct:
$ grep -a Source ./eventlog/Application/*.log
eventlog/Application/1.log:Source: source1
eventlog/Application/2.log:Source: source1
eventlog/Application/3.log:Source: source2
eventlog/Application/4.log:Source: source2
eventlog/Application/5.log:Source: source3
eventlog/Application/6.log:Source: source3
My expectation is that the above directory structure should look like this instead, considering that each event log source had two events written (and I don't see the point of the second Application directory):
./eventlog
└── Application
    ├── source1
    │   ├── 1.log
    │   └── 2.log
    ├── source2
    │   ├── 1.log
    │   └── 2.log
    └── source3
        ├── 1.log
        └── 2.log
Now, I know that the obvious solution might be to use a logging solution other than Mono's built-in event logging. However, at this point, it is important that I stick with the built-in tools available.
Is there a way to configure Mono's built-in local event logging to save the events to log files in the relevant source directory, or is this possibly a bug in Mono?

Write dataframe to path outside current directory with a function?

I got a question that relates to (maybe is a duplicate of) this question here.
I am trying to write a pandas dataframe to an Excel file (which does not exist beforehand) at a given path. Since I have to do this quite a few times, I am trying to wrap it in a function. Here is what I do:
df = pd.DataFrame({'Data': [10, 20, 30, 20, 15, 30, 45]})

def excel_to_path(frame, path):
    writer = pd.ExcelWriter(path, engine='xlsxwriter')
    frame.to_excel(writer, sheet_name='Output')
    writer.save()

excel_to_path(df, "../foo/bar/myfile.xlsx")
I get the error [Errno 2] No such file or directory: '../foo/bar/myfile.xlsx'. How come, and how can I fix it?
EDIT: It works as long as the defined path is inside the current working directory. But I'd like to specify any given path instead. Ideas?
I usually get bitten by forgetting to create the directories. Perhaps the path ../foo/bar/ doesn't exist yet? Pandas will create the file for you, but not the parent directories.
To elaborate, I'm guessing that your setup looks like this:
.
└── src
    ├── foo
    │   └── bar
    └── your_script.py
with src being your working directory, so that foo/bar exists relative to you, but ../foo/bar does not - yet!
So you should add the foo/bar directories one level up:
.
├── foo_should_go_here
│   └── bar_should_go_here
└── src
    ├── foo
    │   └── bar
    └── your_script.py

How to split puppet files

Just wondering: if I have the following Puppet file, can I split it into separate files? Do I have to create a module? Can't I just include them?
node default {
  include mysql
}

class mysql {
  # Make sure MySQL is ...
  notify { "Mysql": }

  # installed
  package { 'mysql':
    require => Exec['apt-update'],  # require 'apt-update' before installing
    ensure  => installed,
  }

  # and running
  service { 'mysql':
    ensure => running,
    enable => true,
  }
}
...
I just want to move the mysql class to a separate file. How do I do this simple thing? Btw, I'm using masterless Puppet.
Edit
Big, big apologies: the truth is I was only using Puppet, without Vagrant. But since I'm not a devops expert, when there was a revision of my question to include Vagrant, I just accepted it. Sorry for the confusion; let me revise my question:
Can I do the separation WITHOUT Vagrant? If I have to use it, so be it.
Thanks
You can move your mysql class into its own module; you'll end up with something like this:
.
├── Vagrantfile
└── puppet
    ├── manifests
    │   └── base.pp
    └── modules
        └── mysql
            └── manifests
                └── init.pp
Vagrantfile would be like:

Vagrant.configure("2") do |config|
  <make all your configuration here>
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.manifest_file  = "base.pp"
    puppet.module_path    = "puppet/modules"
  end
end
the base.pp file will only contain
node default {
  include mysql
}
and your mysql/init.pp file will contain the mysql class itself
class mysql {
  # Make sure MySQL is ...
  notify { "Mysql": }

  # installed
  package { 'mysql':
    require => Exec['apt-update'],  # require 'apt-update' before installing
    ensure  => installed,
  }

  # and running
  service { 'mysql':
    ensure => running,
    enable => true,
  }
}
It can be a good exercise in writing Puppet modules, but honestly you're more likely to use an existing module rather than reinvent the wheel: https://forge.puppet.com/puppetlabs/mysql/2.2.3 would be a good module to use
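As for the Vagrant-free part of the question: the same module layout works with masterless Puppet if you point puppet apply at the manifest and module path yourself. A sketch, assuming the tree above minus the Vagrantfile:

```shell
# Run from the directory that contains puppet/.
# --modulepath tells Puppet where to find the mysql module,
# so that "include mysql" in base.pp resolves to modules/mysql/manifests/init.pp.
puppet apply --modulepath=puppet/modules puppet/manifests/base.pp
```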

Read all files in a nested folder in Spark

If we have a folder folder containing all .txt files, we can read them all using sc.textFile("folder/*.txt"). But what if I have a folder folder containing even more folders named date-wise, like 03, 04, ..., which further contain some .log files. How do I read these in Spark?
In my case, the structure is even more nested & complex, so a general answer is preferred.
If the directory structure is regular, let's say something like this:
folder
├── a
│   ├── a
│   │   └── aa.txt
│   └── b
│       └── ab.txt
└── b
    ├── a
    │   └── ba.txt
    └── b
        └── bb.txt
you can use the * wildcard for each level of nesting, as shown below:
>>> sc.wholeTextFiles("/folder/*/*/*.txt").map(lambda x: x[0]).collect()
[u'file:/folder/a/a/aa.txt',
u'file:/folder/a/b/ab.txt',
u'file:/folder/b/a/ba.txt',
u'file:/folder/b/b/bb.txt']
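The per-level * matching above is ordinary glob semantics, so it can be illustrated locally with Python's glob module (a sketch that rebuilds the example tree in a temporary directory; this is plain Python, not Spark itself):

```python
import glob
import os
import tempfile

# Recreate the example tree from the answer in a throwaway directory
root = tempfile.mkdtemp()
for rel in ("a/a/aa.txt", "a/b/ab.txt", "b/a/ba.txt", "b/b/bb.txt"):
    path = os.path.join(root, rel)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    open(path, "w").close()

# One * per directory level, mirroring sc.wholeTextFiles("/folder/*/*/*.txt")
matches = sorted(
    os.path.relpath(p, root).replace(os.sep, "/")
    for p in glob.glob(os.path.join(root, "*", "*", "*.txt"))
)
print(matches)  # ['a/a/aa.txt', 'a/b/ab.txt', 'b/a/ba.txt', 'b/b/bb.txt']
```

Each * matches exactly one path component, which is why the pattern needs as many wildcards as there are nesting levels.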
Spark 3.0 provides an option recursiveFileLookup to load files from recursive subfolders.
val df = sparkSession.read
  .option("recursiveFileLookup", "true")
  .option("header", "true")
  .csv("src/main/resources/nested")
This recursively loads the files from src/main/resources/nested and its subfolders.
If you want to use only files whose names start with "a", you can use
sc.wholeTextFiles("/folder/a*/*/*.txt") or sc.wholeTextFiles("/folder/a*/a*/*.txt")
as well; * works as a wildcard here too.
sc.wholeTextFiles("/directory/201910*/part-*.lzo") gets all matching file names, not the file contents.
If you want to load the contents of all matched files in a directory, you should use
sc.textFile("/directory/201910*/part-*.lzo")
and set directory reading to recursive:
sc._jsc.hadoopConfiguration().set("mapreduce.input.fileinputformat.input.dir.recursive", "true")
TIP: Scala differs from Python here; the setting below is for Scala:
sc.hadoopConfiguration.set("mapreduce.input.fileinputformat.input.dir.recursive", "true")
