I am a big fan of JAX and have written a lot of code in it. At the same time, I have a bit of legacy code that I need to run with PyTorch (I normally use 1.9.0). This hadn't caused any issues until recently: I was running a rather old version of JAX but decided to update it for some new functionality, and now the two no longer install side by side.
Has anyone successfully created an environment with JAX > 0.2.12 and PyTorch > 1.6.0? If so, how?
I think it would be immensely helpful if we could run the two together, but whenever Anaconda tries to solve the environment, I get a very long list of conflicts. I tried several versions of PyTorch, but only PyTorch 0.3.x seems to solve, which isn't sufficient for my legacy code.
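For reference, this is roughly what I'm running (the Python version and channels here are just my current attempt, not a known-good recipe):

conda create -n jaxtorch python=3.9
conda activate jaxtorch
conda install -c conda-forge jax
conda install -c pytorch pytorch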
Any pointers would be appreciated!
I am looking at a two-step approach for an optimization problem. The first step solves a MILP formulation of the problem; the second step uses the solution from the first step as an initial solution, but now with a MIQP formulation. I have been able to apply this concept in MATLAB using CPLEX. However, I am now trying the same thing using CVXPY with CPLEX as the solver.

I know about the warm_start option, but it does not work with the CPLEX solver. I am able to set CPLEX parameters, but I am not sure how to initialize my solution. I am thinking of setting the ADVANCE START SWITCH parameter for CPLEX to 1, but then I still need to supply the initial solution. According to this page: http://www-eio.upc.es/lceio/manuals/cplex-11/html/usrcplex/solveMIP17.html, I need to use the setVectors method in a Concert Technology application, or CPXcopymipstart in a Callable Library application, to set the initial solution. I am unsure how to use either of these together with CVXPY.
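For reference, a stripped-down sketch of what I have so far (the problem data and variable names are placeholders, not my actual model):

import cvxpy as cp
import numpy as np

# Placeholder data.
n = 10
c = np.random.rand(n)
Q = np.eye(n)  # PSD, so the MIQP objective is convex

x = cp.Variable(n, integer=True)
constraints = [x >= 0, x <= 5]

# Step 1: MILP.
milp = cp.Problem(cp.Minimize(c @ x), constraints)
milp.solve(solver=cp.CPLEX)

# Step 2: MIQP. I hoped assigning x.value and passing warm_start=True
# would seed CPLEX with the MILP solution, but the CPLEX backend ignores it.
x.value = np.round(x.value)
miqp = cp.Problem(cp.Minimize(cp.quad_form(x, Q) + c @ x), constraints)
miqp.solve(solver=cp.CPLEX, warm_start=True,
           cplex_params={"advance": 1})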
The functionality you are looking for does not currently exist in CVXPY. CVXPY is a generic modeling layer that wraps several solvers; it does not expose the CPLEX-specific CPXreadcopymipstarts or CPXaddmipstarts functionality.
The fact that setting the value property of variables and using the warm_start option (as suggested in this answer) doesn't work is a CVXPY issue. It looks like there is an open GitHub issue for this here. In the future, that will likely become the intended solution to your general question.
For now, you'll have to use one of the CPLEX APIs directly. As you mentioned in the comments of this related Stack Overflow question, you do not like the idea of using the lower-level CPLEX Python API. That leaves you with docplex as a viable option.
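As a minimal sketch of the docplex route (the model below is a placeholder, not your formulation), you can attach a MIP start built from your step-one solution:

from docplex.mp.model import Model

mdl = Model(name="miqp_with_start")
x = mdl.integer_var_list(3, lb=0, ub=5, name="x")
mdl.minimize(mdl.sum(xi * xi for xi in x) - mdl.sum(x))

# Build a starting point from the step-one (MILP) values.
start = mdl.new_solution()
for xi, val in zip(x, [1, 1, 1]):
    start.add_var_value(xi, val)
mdl.add_mip_start(start)

mdl.solve()
print(mdl.solution)

add_mip_start is docplex's counterpart to the CPXaddmipstarts functionality mentioned above, so this covers the "advanced start" workflow without dropping to the Callable Library.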
Background
I use an Anaconda environment in Windows 10, made following this post by Mike Müller:
conda create -n keras python=3.6
conda activate keras
conda install keras
This environment has Python 3.6.8, Keras 2.2.4, TensorFlow 1.12.0, and NumPy 1.16.1.
I was working on optimizing code for a team I had just joined when I found I couldn't even run their code. I reduced it to a test case with an MCVE (at least, for me; apologies for not being able to give a runnable example, since load_data and the model are in-house):
import unittest
import keras

class TestEvaluation(unittest.TestCase):
    def setUp(self):
        # In-house function; loads inputs and labels properly.
        self.inputs, self.labels = load_data()
        # Using a pretrained model, known to work.
        self.model = keras.models.load_model('model_name.h5')
        # Passes; the model is loaded successfully.
        self.assertIsNotNone(self.model)

    def test_model_evaluation(self):
        # Fails on my machine, reporting high loss and 0% accuracy.
        scores = self.model.evaluate(self.inputs, self.labels)
        accuracy = scores[1] * 100
        self.assertAlmostEqual(accuracy, 93, delta=5)
Research
This exact scenario runs perfectly fine on someone else's computer, so we've deduced the following: we have the same code, model, and data. Therefore, it should be the environment, right?
I built more Anaconda environments to reproduce the version numbers that work on their machine. However, this didn't fix it. Moreover, as far as I've found by searching online, this seems to be an issue that not many other people have had.
I went through the following other environments:
Python 3.6.4, Keras 2.2.4, TensorFlow 1.12.0, NumPy 1.16.2
(The one that worked for someone else, though admittedly without Anaconda)
Python 3.5.2, Keras 2.2.2, TensorFlow 1.10.0, NumPy 1.15.2
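These were created by pinning versions explicitly, roughly as follows (the exact channels and the pip/conda split may need adjusting):

conda create -n keras-repro python=3.6.4 numpy=1.16.2
conda activate keras-repro
pip install keras==2.2.4 tensorflow==1.12.0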
Question
The model is pretrained, the validation set is correctly loaded, but Keras fails to report the ~93% accuracy I'm expecting.
How can I fix this issue of getting 0% accuracy?
Update
I've learned a lot more about the situation. I found that installing a Python 3.6 environment on Ubuntu 18.04 got me to random guessing (~25% accuracy). So, it's no longer 0%! Further, I tried to replicate a machine that's been used for testing a lot, which had Ubuntu 16.04.5. This got me to ~46% accuracy. I wasn't able to perfectly replicate it since Ubuntu forced me to update to 16.04.6 when I installed some packages, and I also don't know how they run things on the machine they test with (I tried myself, and it didn't work).
I also learned that the guy who compiled and saved the model was using macOS High Sierra, but he also gets it to work in the lab environment. I'll need to follow up on that.
Further, I kept searching online and found others with the same issue:
Keras issue #7676 - An open issue for nearly 2 years. The OP reported his saved model works differently on different machines, which sounds a lot like my problem.
Keras issue #4875 - An open issue for over 2 years. This particular comment seems to be the common solution. I'm not sure whether it will solve the problem, and I don't actually have the code that compiled this model. However, it seems that many people found issues in how their model was built and saved, so I might need to investigate this further (a sketch of the usual workaround follows).
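If the common fix is what it appears to be (saving the architecture and weights separately, then recompiling after loading, rather than relying on load_model alone), a rough sketch would look like this; note that the loss and optimizer below are placeholders, since we don't have the original training code:

from keras.models import model_from_json

# On the machine that built the model ('model' is the trained Keras model
# in the training script): save architecture and weights separately.
with open('model_architecture.json', 'w') as f:
    f.write(model.to_json())
model.save_weights('model_weights.h5')

# On the machine that evaluates: rebuild, reload the weights, and recompile.
with open('model_architecture.json') as f:
    model = model_from_json(f.read())
model.load_weights('model_weights.h5')
model.compile(loss='categorical_crossentropy',  # placeholder; actual loss unknown
              optimizer='adam',                 # placeholder; actual optimizer unknown
              metrics=['accuracy'])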
I apologize for claiming a solution before; I was ecstatic to see that assertNotEqual(accuracy, 0) passed.
Be Aware
I previously wrote an incorrect answer, and this may very well be another poorly-formed solution. Please be aware I haven't fully tested this hypothesis. Also be aware that this is still an open issue in the Keras community and many people have messed things up in a number of ways to arrive at this problem.
Developing Our Solution
Let Person A be the guy who can run the model okay on our lab computers, as well as his MacBook. Let Person B be the one who can't (i.e. me and everyone else).
I got my team to take this problem more seriously. We got to the point where A has a terminal open at a desktop next to B. A runs the test script and gets 92% accuracy. B runs the script and gets 2%. At this point, we were on the same machine using the exact same Python environment and Keras settings (~/.keras). We were also sure that we had the same script, model, and data. Or, so we thought.
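(For anyone comparing environments the same way: the Keras settings live in ~/.keras/keras.json. The stock file looks like the following; a mismatch in image_data_format alone can wreck accuracy on convolutional models.)

{
    "floatx": "float32",
    "epsilon": 1e-07,
    "backend": "tensorflow",
    "image_data_format": "channels_last"
}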
I chose to doubt everything at that point. I scp'd the script, model, and data from A's account to B's account, and it worked. Here's what that could mean:
A Guess at the Problem
The files B had were bad. B got them from team storage on Google Drive, as well as from Slack; some were also delivered by A from his MacBook. The scripts were genuinely identical. The model and data B had, however, differed at the binary level: same size in bytes, superficially "similar" content, possibly an encoding issue introduced in transit.
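For anyone hitting something similar: comparing checksums is a quick way to catch this kind of silent corruption. A minimal sketch (the paths are placeholders):

import hashlib

def file_digest(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

# Same byte size does not imply same contents.
print(file_digest('/home/a/model_name.h5'))
print(file_digest('/home/b/model_name.h5'))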
It wasn't Google Drive. I uploaded and re-downloaded the correct files, and nothing went wrong. However, the wrong file was there to begin with.
Possibly Slack? Perhaps Slack was corrupting the encoding when B downloaded A's files.
Possibly the MacBook? macOS generates a lot of .DS_Store-like files, and I don't know much about it. macOS might've played a role in the model and data being OS-dependent. I wouldn't rule it out simply because I'm ignorant of how that OS operates. I do suspect it heavily, though, because I happen to have a spare MacBook, and I got the model to work in that environment before we started testing on the same machine.
Worst Case Scenario
We're accepting that we can get the model to work on a single machine that everyone has access to. Does this mean that the model might still not work on other machines? Unfortunately, yes.
We're not taking the time to test other machines after wasting nearly 2 months on this problem. I hope this research and debugging helps someone else out. I didn't want to leave it at "never mind, fixed it."
I am working on a project that was written in Fortran 95 (using the FTN95 compiler) and VC++ 6, and that was migrated to Intel Parallel Studio 15 and Visual Studio 2012, respectively. Since the migration, other features were added, but it has now been discovered that an older feature generates NaN values at run time in the Fortran code.
After several hours of debugging, I am quite sure that the new features are not to blame, mainly because every cause of the bug I found lies in code that is identical to the latest working version of the program (compiled with FTN95 and VC++ 6). I therefore tend to believe the issue was caused by the migration itself. The problem is that the program compiles without errors, so I'm asking for any ideas on how I could track this down.
I know the description is vague at best, and I cannot give many details about the bug since it spans thousands of lines of code (the application is used for scientific calculations). Any ideas would be highly appreciated.
EDIT: Some general details about the bug
The program gets user input from the GUI and initializes the corresponding variables in Fortran. This stage works as expected, and values are passed correctly. After all the desired options have been chosen, the Fortran code runs its calculations and writes its results (the NaN values) to a file.
Since I'm a big fan of Dapper and am using it for a couple of SQL Azure projects, I would like to use it on MonoTouch as well, against the built-in Mono.Data.SQLite.
I realize that Dapper's speed comes from dynamic code generation, which unfortunately is a big no-no on iOS, where everything has to be compiled ahead-of-time by MonoTouch.
First question: Has anyone made an effort to provide a reflection-based implementation of the relevant parts of Dapper? (I know it would be a LOT slower.) If not, how hard would it be to implement? (I have only glanced over the Dapper source.)
Second question: I hope I'm not sounding naive here, but would it be remotely possible to write a little utility that materializes the dynamically generated IL for your entity POCOs into an assembly source file that could be added to your MonoTouch project and thus get AOT-compiled at build time? Or is this impossible due to joins, QueryMultiple, etc.?
Note: I realize there is at least one attempt to port Dapper to MonoTouch, but glancing over the source I have no idea how it's supposed to fly, since all the dynamic method generation is still in there.