Can I decouple the Thermostat UI and temperature control functions? - nest-api

I have a house with three floors plus a finished basement. There are three A/C units and two furnaces. The thermostat units are placed in locations chosen by the installer that are not the rooms we use most. Moving the thermostats would be very expensive.
I have abundant embedded computing power in the house (4 servers are on permanently) and the house currently has two Nest Protect devices. My plan is to install a Nest Protect in each of the rooms we want to have temperature monitored and then work out some way to control the furnace so that the temperature is controlled in the rooms actually being used.
This will obviously require the UI functions of the Nest to be decoupled from the control functions in some way. One option would be to drop an Arduino into the system and have that do the actual control, but that seems unnecessary when the thermostat is already capable.
I took a look at the Nest documentation, but it seems to be based on a model where all the intelligence happens in the Google cloud. That isn't a model I am going to tolerate. I don't see the need to connect to a Google server to mediate communications between devices in my own house. And I didn't see anything about the control functions.

It doesn't look like the Protect provides temperature data. That's unfortunate, because if it did you could indeed make this work.
One method would be to place the Protects into groups that correspond to your heating/cooling zones. Whenever a Protect in one of your groups moves out of your target temperature range, you could raise or lower the target temperature of the Nest that controls that zone to some extreme, just to get the heating/cooling cycle to turn on. As soon as all of the Protects in the group are back in the target zone, move the Nest back to the target temperature as well.
Nest could support this directly by making the Protect act as a remote temperature sensor, but I'm not sure the hardware even has this capability, let alone whether it is exposed through the API.
You could still achieve the same effect by adding your own temperature sensors, but that would require additional research into what sensors are available.
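If you do go the route of adding your own sensors, the control loop itself is simple. Here is a minimal Python sketch of the approach described above, assuming hypothetical read_room_temperature() and set_nest_target() helpers that you would wire up to your actual sensor hardware and whatever thermostat API you end up using:

```python
import time

# Hypothetical helpers -- wire these to your own sensors and to whatever
# API your thermostat exposes. Names, units and values are illustrative only.
def read_room_temperature(sensor_id):
    raise NotImplementedError

def set_nest_target(zone, temp_f):
    raise NotImplementedError

ZONES = {
    "second_floor": {"sensors": ["bedroom", "office"], "target": 70.0, "band": 1.5},
}
FORCE_HEAT = 85.0   # extreme setpoint used only to force a heating cycle on

while True:
    for zone, cfg in ZONES.items():
        temps = [read_room_temperature(s) for s in cfg["sensors"]]
        if min(temps) < cfg["target"] - cfg["band"]:
            # any occupied room too cold: push the zone's Nest high to force heat
            set_nest_target(zone, FORCE_HEAT)
        elif all(t >= cfg["target"] for t in temps):
            # everyone back in range: restore the normal setpoint
            set_nest_target(zone, cfg["target"])
    time.sleep(60)
```

The band gives a little hysteresis so the Nest isn't slammed between extremes on every reading; cooling would mirror the same logic with a low forcing setpoint.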

Related

Blockchain Application Architecture: UML & Use Cases

For my internship, I need to implement a blockchain-based solution to manage a drug supply chain. Managing this supply chain involves tracking and tracing (geolocating) a drug along the chain, but also monitoring the storage temperature to check that the cold chain is respected. For that I created a mock-up of the POC of my DApp (https://balsamiq.cloud/sum5oq5/p8lsped) and I also wanted to prepare by producing a UML diagram and use cases. However, I didn't find much information about UML and use cases for blockchain, apart from two pieces of literature which were quite different, so I don't know whether what I did is correct or not...
The users of my DApp will be the following:
The stakeholders (manufacturers, distributors and retailers), who will use the DApp to place orders and also monitor them. They can also search the history for a specific order. Finally, through IoT sensors they update the conditions of the order (temperature & location).
The administrator, whose role is to update the DApp and its rules, but also to add or delete users while defining the rights they have on the blockchain (I intend to use a permissioned blockchain). Finally, they are also there to help in case of technical problems.
The DApp that I have in mind works as follows:
A user, the customer, can place an order (a list of products) with a certain seller and choose the final destination of the order.
The order is then put together before being shipped or stored in the depots of one of the stakeholders (distributor or retailer), with a description of the storage and/or shipping conditions of the product (for example, the product must be stored or transported in a room with a temperature below 5°C). During shipping and storage, an IoT device feeds the DApp with the temperature and geolocation of the product, updating the data every 5-10 minutes.
Obviously there will be a function that allows all the users to see the history of past orders and search inside a specific order.
In cases where the temperature does not respect the recommended range, the smart contract sends an alert. The same happens if the geolocation of the product is "weird", such as being in some European country when it should be in an Asian country; an alert will again be sent by the smart contract (a rough sketch of these rules follows below). Finally, once the product has been delivered to the location requested by the customer, the money for the order is paid to the seller.
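To make the intended behaviour concrete, here is an illustrative Python sketch of the alert and payment rules just described. The real logic would of course live in a smart contract on the permissioned chain; the field names, thresholds and regions are placeholders:

```python
# Illustrative sketch of the alert / payment rules described above.
# In the real DApp this logic would be encoded in a smart contract;
# field names, thresholds and regions here are placeholders only.

MAX_TEMP_C = 5.0
ALLOWED_REGIONS = {"EU"}          # e.g. this shipment must stay within Europe

def check_reading(order, reading, alerts):
    """Called for each IoT update (every 5-10 minutes)."""
    if reading["temperature_c"] > MAX_TEMP_C:
        alerts.append((order["id"], "cold chain broken", reading))
    if reading["region"] not in ALLOWED_REGIONS:
        alerts.append((order["id"], "unexpected location", reading))

def on_arrival(order, reading):
    """Release payment once the product reaches the requested destination."""
    if reading["location"] == order["destination"]:
        return f"pay {order['price']} to {order['seller']}"
    return "not delivered yet"
```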
So, based on what I explained, I came here hoping that someone can tell me whether the use cases and UML diagram I made are correct or not.
I thank in advance anybody who will take the time to help me.

Is this Use Case correct?

It's my first time making a use case, and this is for my coursework.
I had to follow the case study below.
Case Study 8: Warehouse Control System (WCS)
A warehouse distributes health food and related products. Customers order a particular product and quantity from the warehouse. The Warehouse Control System (WCS) saves the order and provides the customer with the order number. The WCS generates a pick list and shipping label, which tells the order-picker how many of each item to pick to fulfil the order. The order-picker picks the items, places them in the box, and places the shipping label on it. The order-picker then uses the WCS to specify whether the order is ready or not. Then the manager sends the order number, address, and the payment data to the shipping company. At the end of the day, the shipping company arrives to pick up all the orders. In some cases the inventory of the products in stock is carried out by the staff, but in others it is outsourced to an external company. Each staff member has a specific function, which is either to raise an order or to check the re-order level of the products in stock.
The company wants to create a computer system that allows employees and external companies to access the application system on desktop. Model, design and implement a GUI client that can access the database using Visual Studio or any other software development package. The database must be designed from the class model and the entity data model using MS Access or an Oracle database.
I'm not sure: should the Warehouse Control System (WCS) be an actor? If not, how do I make the use case without it?
Here is the use case I made:
The WCS is the system under consideration (the blue boundary).
Some observations:
Use verb-subject(-object) phrases to name use cases (e.g. Place order)
Order ready and the like are not use cases
Try not to start functional decomposition (as it seems you did with that Order ready)
As usual, I recommend reading Bittner/Spence about use cases.

Detecting damaged car parts

I am trying to build a system that, given an image of a car, can assess the percentage of damage and also find out which parts of the car are damaged.
Is there any possible way to do this using Python and OpenCV or TensorFlow?
The GitHub repositories I found that were relevant to my work are these
https://github.com/VakhoQ/damage-car-detector/tree/master/DamageCarDetector
https://github.com/neokt/car-damage-detective
But what they provide is a qualitative output (e.g. they say the car damage is high or low); I want to print out a quantitative output (percentage of damage) along with the names of the individual parts which are damaged.
Is this possible ?
If so please help me out.
Thank you.
To extend the good answers given by @yves-daoust: it is not a trivial task and you should not try to do it at once with one single approach.
You should ask yourself how a human with a comparable task, say an expert who reviews these cars at the end of a leasing contract, would proceed. Then you have to formulate requirements and also restrictions for your system.
For instance, an expert first checks for any visual occurrences and rates these; then they may check technical issues which may well be hidden from optical sensors (i.e. whether the car is drivable, driving a round and estimating whether the engine runs smoothly, whether the steering geometry is aligned (i.e. the car manages to stay in line), whether there are any minor vibrations which should not be there, and so on), and they may also apply force (trying to manually shake the wheels to check whether the bearings are OK).
If you define your measurement system as restricted to just a normal camera sensor, you are somewhat limited in what your system is able to deliver.
If you just want to spot cosmetic damages, i.e. classification of scratches in paint and rims, I'd say a state of the art machine vision application should be able to help you to some extent:
First you'd need to detect the scratches. Bear in mind that visibility of scratches, especially in the field with changing conditions (sunlight), may be a very hard to impossible task for a cheap sensor. E.g. to cope with reflections a system might need to make use of polarizing filters, and special-effect paints may interfere with your optical system in a way that prevents you from spotting anything.
Secondly, after you detect the position and dimension of these scratches in camera coordinates, you need to transform them into real-world coordinates to know the real dimensions of these scratches. It would also be of great use to know the exact location of the scratch on the car (which would require a digital twin of the car, and that is no longer a trivial thing to produce).
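For the camera-to-real-world step, one common shortcut on an (approximately) flat panel is a planar homography: measure four reference points on the panel, then map detected pixel coordinates into panel millimetres. A minimal OpenCV sketch, with made-up reference points and units:

```python
import numpy as np
import cv2

# Pixel positions of four known markers on the (flat) panel, and their
# real-world positions on that panel in millimetres. Values are made up.
px = np.float32([[120, 80], [980, 95], [990, 640], [110, 650]])
mm = np.float32([[0, 0], [900, 0], [900, 550], [0, 550]])

H = cv2.getPerspectiveTransform(px, mm)   # 3x3 planar homography

# Map the pixel endpoints of a detected scratch into panel coordinates
scratch_px = np.float32([[[300, 200]], [[340, 390]]])
scratch_mm = cv2.perspectiveTransform(scratch_px, H)
length_mm = np.linalg.norm(scratch_mm[0, 0] - scratch_mm[1, 0])
print(f"scratch length is roughly {length_mm:.1f} mm")
```

This only holds per roughly planar panel; once you want positions across curved bodywork, you are back to needing the digital twin mentioned above.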
After determining the extent of the scratch and its position on the car, you need to apply a cost model. Some car parts are easily fixable: a scratch in the bumper just means respraying the bumper, but a scratch in the C-pillar can easily mean repainting the whole rear quarter if it is not to be noticeable anymore.
The same goes for bigger scratches / cracks: the optical detection model needs to be able to distinguish between scratches and cracks (which is very hard to do just by looking at them), and then the cost model can infer the cost, i.e. whether a bumper just needs a respray or needs complete replacement (because it is cracked and not just scratched). This cost model may seem easy, but bear in mind it needs to be adapted to every car you "scan", because a cheap-to-fix damage on one car body might be very hard to fix on a different car body. I'd say this might even be harder than spotting the initial scratches, because you'd need to obtain the construction plans / repair part lists of any vehicle you want to quote (the repair handbooks / repair part lists are mostly accessible if you are a registered mechanic, but they might cost licensing fees).
You see, this is a very complex problem which is composed of multiple hard sub-problems. The easiest, and probably the best, way to tackle it is a bottom-up approach, i.e. starting with a simple "scratch detector" which just spots scratches in paint. Then go from there, and you will quickly see what is possible and what is not.
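As a starting point for that bottom-up "scratch detector", here is a very rough OpenCV sketch; the thresholds are guesses and would need tuning per paint, lighting and camera:

```python
import cv2

# Very rough scratch-candidate detector: scratches tend to show up as thin,
# elongated, high-contrast edges against an otherwise smooth painted surface.
img = cv2.imread("panel.jpg", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)
edges = cv2.Canny(blur, 50, 150)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    elongation = max(w, h) / max(1, min(w, h))
    # long and thin contours are scratch-like; blobs and speckles are not
    if cv2.arcLength(c, False) > 80 and elongation > 4:
        candidates.append((x, y, w, h))

print(f"{len(candidates)} scratch-like regions found")
```

Combined with the homography sketch above, each candidate can then be reported with an approximate real-world size, which is the first small step towards a quantitative "percentage of damage".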

How to export specific price and volume data from the LMAX level 2 widget to excel

Background -
I am not a programmer.
I do trade spot forex on an intraday basis.
I am willing to learn programming
Specific Query -
I would like to know how to export into Excel in real time 'top of book' price and volume data as displayed on the LMAX level 2 widget/frame on -
https://s3-eu-west-1.amazonaws.com/lmax-widget/website-widget-quote-prof-flex.html?a=rTWcS34L5WRQkHtC
In essence I am looking to export:
1) price and volume data where the coloured flashes occur;
2) price and volume data for when the coloured flashes do not occur.
I understand that 1) and 2) will encompass all the top-of-book prices and volumes. However, I would like to keep 1) and 2) separate/distinguished as far as data collection is concerned.
Time period for which the collected data is intended to be stored: 2-3 hours.
What kind of languages do I need to know to do the above?
I understand that I need to be an advanced Excel user too.
Long term goals -
I intend to use the above information to make discretionary intraday trading decisions.
In the long run I will get more involved with creating an algo or indicator to help with the decision making process, which would include the information above.
I have understood that one needs to know coding to get involved in activities such as the above. Hence I have started learning C++, mostly to get a feel for coding.
I have been searching all over the web as to where to start in this endeavor. However I am quite confused and overwhelmed with all the information.
Hence apart from the specific data export query, any additional guidelines would also be helpful.
As of now I use MT4 to trade. Hence I believe to do the above - I will need more than just MT4.
Any help would be highly appreciated.
Yes, MetaTrader4 is still not able (in spite of all the white-labelled Terminals' OrderBook Add-On marketing and PR efforts) to provide OrderBook-L2/DoM data to your MQL4 / NewMQL4 algorithm for any decision making. Third-party software integration is needed to make MQL4 code aware of the real-time L2/DoM data.
The LMAX widget has an impressive look & feel; however, for your Excel export it requires a lot of programming effort to re-use it as an automated scanner producing data for 1) and 2), and there may be further, non-technical trouble with legal / operational restrictions on running an automated scanner against such a data source. To give an example, one data publisher's policy restricts automated options-pricing scanners for options on { FTSE | CAC | AMS | DAX } to re-visiting the online published data sources no more than once per quarter of an hour, and they get blocked / black-listed otherwise. So care and proper data-source engineering are needed.
The size of the data collection is another issue. Excel has restrictions on the number of rows/columns that may be imported; the larger the data files, the more likely CSV imports are to hit these limits. L2/DoM data collected for 2-3 hours for just one single FX Major may go beyond such a limit, as there are many records per second (tens, if not hundreds, with just a few milliseconds between them). Just writing the collected data records to disk typically takes several minutes, so a properly distributed processing data-flow design and non-blocking file-I/O engineering are a must.
Real-time system design is the right angle from which to view the problem, rather than just a programming-language exercise. Having mastered some programming language is a great move; nevertheless, robust real-time system design, and trading software is such a domain, requires, with all respect, a lot more insight and hands-on experience than making MQL4 code run multi-threaded & multi-processed with a few DLL services for a Cloud/Grid-based distributed processing system.
How much real-time traffic is expected to be there?
For just a raw idea of what the market can produce per second, per millisecond, per microsecond, a NYNEX traffic analysis for one instrument shows how wild a single second can look, and even more so once sampled at 5 msec (charts not included).
How to export
1. Check whether the data-source owner legally permits your automated processing.
2. Create your own real-time DataPump software, independent of the HTML-wrapped widget.
3. Create your own DB-store to efficiently off-load the scanned data records from the real-time DataPump (a minimal sketch of steps 2 and 3 follows below).
4. Test the live data-source >> DataPump >> DB-store performance & robustness, so that it can serve error-free on a 24/6 duty for several FX Majors in parallel.
5. Integrate your DataPump-fed DB-store local data source for on-line/off-line interactions with your preferred { MT4 | Excel | quantitative-analytics } package.
6. Integrate monitoring of any production-environment irregularity in your real-time processing pipeline, which may range from network issues, VPN / hosting issues and data-source availability issues to an unexpected change in the scanned data source's format/access conditions.
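As a very small illustration of steps 2 and 3, here is a single-threaded Python sketch that polls a hypothetical fetch_top_of_book() function and off-loads the records into a local SQLite store. A production version would need the non-blocking I/O, parallelism and monitoring just described:

```python
import sqlite3
import time

def fetch_top_of_book():
    """Hypothetical placeholder -- replace with however you legally obtain
    the top-of-book feed (vendor API, FIX session, etc.)."""
    raise NotImplementedError

db = sqlite3.connect("l2_ticks.db")
db.execute("""CREATE TABLE IF NOT EXISTS top_of_book (
    ts REAL, instrument TEXT, bid REAL, bid_qty REAL, ask REAL, ask_qty REAL)""")
db.commit()

while True:
    rec = fetch_top_of_book()            # e.g. {"instrument": "EURUSD", "bid": ..., ...}
    db.execute("INSERT INTO top_of_book VALUES (?,?,?,?,?,?)",
               (time.time(), rec["instrument"], rec["bid"], rec["bid_qty"],
                rec["ask"], rec["ask_qty"]))
    db.commit()                          # batch commits in a real pump
    time.sleep(0.1)
```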

How do you measure if an interface change improved or reduced usability?

For an ecommerce website how do you measure if a change to your site actually improved usability? What kind of measurements should you gather and how would you set up a framework for making this testing part of development?
Multivariate testing and reporting is a great way to actually measure these kind of things.
It allows you to test what combination of page elements has the greatest conversion rate, providing continual improvement on your site design and usability.
Google Website Optimizer has support for this.
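For instance, a quick way to check whether a variant's difference in conversion rate is more than noise is a simple contingency-table test. A sketch with made-up visitor and conversion counts, using SciPy:

```python
from scipy.stats import chi2_contingency

# Made-up numbers: visitors and conversions for the old and new design.
old = {"visitors": 5200, "conversions": 182}
new = {"visitors": 5100, "conversions": 231}

table = [
    [old["conversions"], old["visitors"] - old["conversions"]],
    [new["conversions"], new["visitors"] - new["conversions"]],
]
chi2, p, _, _ = chi2_contingency(table)
print(f"old: {old['conversions']/old['visitors']:.2%}, "
      f"new: {new['conversions']/new['visitors']:.2%}, p = {p:.3f}")
```

A small p-value suggests the change in conversion rate is unlikely to be chance alone.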
Similar methods to those you used to identify the usability problems to begin with: usability testing. Typically you identify your use cases and then run a lab study evaluating how users go about accomplishing certain goals. Lab testing is typically good with 8-10 people.
A more information-rich methodology we have adopted to understand our users is anonymous data collection (you may need user permission; make your privacy policies clear, etc.). This simply means evaluating what buttons/navigation menus users click on, or how users delete something (i.e. when changing quantity, are more users entering 0 and updating the quantity, or hitting X?). This is a bit more complex to set up; you have to develop an infrastructure to hold this data (which is actually just counters, i.e. "Times clicked x: 138838383, Times entered 0: 390393") and allow data points to be created as needed to plug into the design.
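That counter infrastructure really can be as small as it sounds. A minimal sketch of the server-side piece, assuming the client posts an event name to an /event endpoint (Flask and the endpoint name are illustrative choices, not a prescribed stack):

```python
import sqlite3
from flask import Flask, request

app = Flask(__name__)
DB = "ui_events.db"

with sqlite3.connect(DB) as db:
    db.execute("CREATE TABLE IF NOT EXISTS events (name TEXT PRIMARY KEY, count INTEGER)")

@app.route("/event", methods=["POST"])
def record_event():
    # Client posts an event name, e.g. "delete_via_zero_qty" or "delete_via_x_button"
    name = request.form["name"]
    with sqlite3.connect(DB) as db:
        db.execute("INSERT INTO events VALUES (?, 1) "
                   "ON CONFLICT(name) DO UPDATE SET count = count + 1", (name,))
    return "", 204
```

Comparing the counters before and after a UI change then gives you the kind of raw numbers described above.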
To push the measurement of an improvement from a UI change upstream, from the end user (where the data gathering can take a while) to design or implementation, some simple heuristics can be used:
Is the number of actions it takes to perform a scenario smaller? (If yes, then it has improved.) Measurement: # of steps reduced / added.
Does the change reduce the number of kinds of input devices used (even if the # of steps is the same)? By this I mean: if you take something that relied on both the mouse and keyboard and change it to rely only on the mouse or only on the keyboard, then you have improved usability. Measurement: change in # of devices used.
Does the change make different parts of the website consistent? E.g. if one part of the e-commerce site loses changes made while you are not logged on and another part does not, this is inconsistent. Changing it so that they have the same behaviour improves usability (preferably towards the more fault-tolerant behaviour, please!). Measurement: make a graph (a flow chart, really) mapping the ways a particular action could be done. Improvement is a reduction in the # of edges in the graph.
And so on... find some general UI tips, figure out some metrics like the above, and you can approximate usability improvement.
Once you have these design-level approximations of usability improvement and then gather longer-term data, you can see whether they have any predictive power for the end-user reaction (for example: over the last 10 projects, we've seen an average of 1% quicker scenarios for each action removed, with a range of 0.25% and a standard deviation of 0.32%).
The first way can be fully subjective or partly quantified: user complaints and positive feedback. The problem with this is that you may have some strong biases when it comes to filtering that feedback, so you had better make it as quantitative as possible. Having a ticketing system to file every report from the users and gathering statistics about each version of the interface might be useful. Just get your statistics right.
The second way is to measure the difference in a questionnaire taken about the interface by end-users. Answers to each question should be a set of discrete values and then again you can gather statistics for each version of the interface.
The latter way may be much harder to set up (designing a questionnaire, and possibly the controlled environment for it, as well as the guidelines to interpret the results, is a craft by itself), but the former makes it unpleasantly easy to mess up the measurements. For example, you have to consider the fact that the number of tickets you get for each version depends on how long it has been in use, and that all time ranges are not equal (e.g. a whole class of critical issues may never be discovered before the third or fourth week of usage, or users might tend not to file tickets in the first days of use, even if they find issues, etc.).
Torial stole my answer. Still, if there is a measure of how long it takes to do a certain task, and the time is reduced while the task is still completed, then that's a good thing.
Also, if there is a way to record the number of cancels, then that would work too.
