Visualization of satellite track retrieved with pyephem is off - kml

Using the code below with pyephem and fastkml, I'd like to extract the ground track of a satellite from a TLE. The code looks as follows:
import numpy as np
import ephem
import datetime as dt
from fastkml import kml
from shapely.geometry import Point, LineString, Polygon
name = "ISS (ZARYA)"
line1 = "1 25544U 98067A 16018.27038796 .00010095 00000-0 15715-3 0 9995"
line2 = "2 25544 51.6427 90.6544 0006335 30.9473 76.2262 15.54535921981506"
tle_rec = ephem.readtle(name, line1, line2)
start_dt = dt.datetime.today()
intervall = dt.timedelta(minutes=1)
timelist = []
for i in range(100):
    timelist.append(start_dt + i*intervall)
positions = []
for t in timelist:
    tle_rec.compute(t)
    positions.append((tle_rec.sublong, tle_rec.sublat, tle_rec.elevation))
k = kml.KML()
ns = '{http://www.opengis.net/kml/2.2}'
p = kml.Placemark(ns, 'Sattrack', 'Test', '100 Minute Track')
p.geometry = LineString(positions)#, tesselate=1,altitudemode="absolute")
k.append(p)
with open("test.kml", 'w') as kmlfile:
kmlfile.write(k.to_string())
Sadly, when I load the kml into Google Earth, the track looks as follows:
Any ideas where this goes wrong?

Your ground track is a loop around the position 0°N (on the Equator) and 0°E (directly south of Greenwich, near the Gulf of Guinea). This suggests that you are using angles expressed in radians, which can reach at most a value of about 6.2, and passing them to mapping software which reads them as degrees.
You should try converting them to degrees first:
positions.append((tle_rec.sublong / ephem.degree,
                  tle_rec.sublat / ephem.degree,
                  tle_rec.elevation))
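As a quick sanity check (assuming ephem.degree is the radians-per-degree constant, π/180), the conversion can be verified without a TLE at all:

```python
import math

# stand-in for ephem.degree, which is the number of radians in one degree
DEGREE = math.pi / 180

# hypothetical sub-latitude in radians, as pyephem reports angles
sublat_rad = 0.9
sublat_deg = sublat_rad / DEGREE
print(round(sublat_deg, 4))  # 51.5662
```

Any angle pyephem hands back in radians stays below about 6.28, which is why the uncorrected track hugged 0°N 0°E.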


Using python to plot 'Gridded' map

I would like to know how I can create a gridded map of a country (e.g. Singapore) with a resolution of 200 m x 200 m squares (50 m or 100 m is fine too).
I would then use the 'nearest neighbour' technique to assign a rainfall data and colour code to each square based on the nearest rainfall station's data.
[I have the latitude,longitude & rainfall data for all the stations for each date.]
Then, I would like to store the data in an Array for each 'gridded map' (i.e. from 1-Jan-1980 to 31-Dec-2021)
Can this be done using python?
P.S. Below is a 'simple' version I made as an example of how the 'gridded' map should look for one particular day.
https://i.stack.imgur.com/9vIeQ.png
Thank you so much!
Can this be done using python? YES
I have previously provided a similar answer on binning a spatial dataframe. Refer to that as well for the concepts.
You noted that you are working with Singapore geometry and rainfall data. To set up an answer I have sourced this data from government sources.
For the purpose of this answer I have used a 2 km x 2 km grid, so that resource utilisation is reduced when plotting to demonstrate the answer.
Core concept: create a grid of box polygons that cover the total bounds of the geometry. Note it's important to use a UTM CRS here so that bounds expressed in meters make sense. Once the boxes are created, remove the boxes that are within the total bounds but do not intersect the actual geometry.
Next, create a geopandas dataframe of rainfall data, using the longitude and latitude of each weather station to create points.
Final step: sjoin_nearest() the grid geometry with the rainfall geometry and data.
Clearly this final data frame gdf_grid_rainfall is a data frame, which is effectively an array; you can use it as an array as you please.
I have provided folium and plotly interactive visualisations that demonstrate clearly that the solution is working.
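The box-grid construction can be sketched in isolation with plain Python (hypothetical bounds in meters standing in for the UTM total_bounds):

```python
# hypothetical UTM bounds: minx, miny, maxx, maxy in meters
a, b, c, d = 0, 0, 6000, 4000
STEP = 2000

# grid lines along each axis; consecutive pairs form cell edges
xs = list(range(a, c + STEP, STEP))
ys = list(range(b, d + STEP, STEP))
cells = [
    (x0, y0, x1, y1)
    for x0, x1 in zip(xs, xs[1:])
    for y0, y1 in zip(ys, ys[1:])
]
print(len(cells))  # 3 columns x 2 rows = 6 cells
```

Each tuple can then be handed to shapely.geometry.box to build the grid GeoDataFrame as in the solution code.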
solution
Dependent on data sourcing
# number of meters
STEP = 2000
a, b, c, d = gdf_sg.to_crs(gdf_sg.estimate_utm_crs()).total_bounds
# create a grid for Singapore
gdf_grid = gpd.GeoDataFrame(
    geometry=[
        shapely.geometry.box(minx, miny, maxx, maxy)
        for minx, maxx in zip(np.arange(a, c, STEP), np.arange(a, c, STEP)[1:])
        for miny, maxy in zip(np.arange(b, d, STEP), np.arange(b, d, STEP)[1:])
    ],
    crs=gdf_sg.estimate_utm_crs(),
).to_crs(gdf_sg.crs)
# restrict grid to only squares that intersect with Singapore geometry
gdf_grid = (
    gdf_grid.sjoin(gdf_sg)
    .pipe(lambda d: d.groupby(d.index).first())
    .set_crs(gdf_grid.crs)
    .drop(columns=["index_right"])
)
# geodataframe of weather station locations and rainfall by date
gdf_rainfall = gpd.GeoDataFrame(
    df_stations.merge(df, on="id")
    .assign(
        geometry=lambda d: gpd.points_from_xy(
            d["location.longitude"], d["location.latitude"]
        )
    )
    .drop(columns=["location.latitude", "location.longitude"]),
    crs=gdf_sg.crs,
)
# weather station to nearest grid
gdf_grid_rainfall = gpd.sjoin_nearest(gdf_grid, gdf_rainfall).drop(
    columns=["Description", "index_right"]
)
# does it work? let's visualize with folium
gdf_grid_rainfall.loc[lambda d: d["Date"].eq("20220622")].explore("Rainfall (mm)", height=400, width=600)
data sourcing
import requests, itertools, io
from pathlib import Path
import urllib
from zipfile import ZipFile
import fiona.drvsupport
import geopandas as gpd
import numpy as np
import pandas as pd
import shapely.geometry
# get official Singapore planning area geometry
url = "https://geo.data.gov.sg/planning-area-census2010/2014/04/14/kml/planning-area-census2010.zip"
f = Path.cwd().joinpath(urllib.parse.urlparse(url).path.split("/")[-1])
if not f.exists():
    r = requests.get(url, stream=True, headers={"User-Agent": "XY"})
    with open(f, "wb") as fd:
        for chunk in r.iter_content(chunk_size=128):
            fd.write(chunk)
zfile = ZipFile(f)
zfile.extractall(f.stem)
fiona.drvsupport.supported_drivers['KML'] = 'rw'
gdf_sg = gpd.read_file(
    [_ for _ in Path.cwd().joinpath(f.stem).glob("*.kml")][0], driver="KML"
)
# get data about Singapore weather stations
df_stations = pd.json_normalize(
    requests.get("https://api.data.gov.sg/v1/environment/rainfall").json()["metadata"][
        "stations"
    ]
)
# dates to get data from weather.gov.sg
dates = pd.date_range("20220601", "20220730", freq="MS").strftime("%Y%m")
df = pd.DataFrame()
# fmt: off
bad = ['S100', 'S201', 'S202', 'S203', 'S204', 'S205', 'S207', 'S208',
       'S209', 'S211', 'S212', 'S213', 'S214', 'S215', 'S216', 'S217',
       'S218', 'S219', 'S220', 'S221', 'S222', 'S223', 'S224', 'S226',
       'S227', 'S228', 'S229', 'S230', 'S900']
# fmt: on
for stat, month in itertools.product(df_stations["id"], dates):
    if stat not in bad:
        try:
            df_ = pd.read_csv(
                io.StringIO(
                    requests.get(
                        f"http://www.weather.gov.sg/files/dailydata/DAILYDATA_{stat}_{month}.csv"
                    ).text
                )
            ).iloc[:, 0:5]
        except pd.errors.ParserError as e:
            bad.append(stat)
            print(f"failed {stat} {month}")
            continue  # skip concat when this download could not be parsed
        df = pd.concat([df, df_.assign(id=stat)])
df["Rainfall (mm)"] = pd.to_numeric(
    df["Daily Rainfall Total (mm)"], errors="coerce"
)
df["Date"] = pd.to_datetime(df[["Year","Month","Day"]]).dt.strftime("%Y%m%d")
df = df.loc[:,["id","Date","Rainfall (mm)", "Station"]]
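The Year/Month/Day assembly into a YYYYMMDD string can be checked on a toy frame:

```python
import pandas as pd

# toy frame mimicking the Year/Month/Day columns of the downloaded CSVs
toy = pd.DataFrame({"Year": [2022, 2022], "Month": [6, 7], "Day": [22, 1]})
dates = pd.to_datetime(toy[["Year", "Month", "Day"]]).dt.strftime("%Y%m%d")
print(dates.tolist())  # ['20220622', '20220701']
```

pd.to_datetime assembles a datetime from year/month/day columns (the names are matched case-insensitively), which is what makes the later "Date" string filtering possible.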
visualisation using plotly animation
import plotly.express as px
# reduce dates so figure builds in sensible time
gdf_px = gdf_grid_rainfall.loc[
    lambda d: d["Date"].isin(
        gdf_grid_rainfall["Date"].value_counts().sort_index().index[0:15]
    )
]
px.choropleth_mapbox(
    gdf_px,
    geojson=gdf_px.geometry,
    locations=gdf_px.index,
    color="Rainfall (mm)",
    hover_data=gdf_px.columns[1:].tolist(),
    animation_frame="Date",
    mapbox_style="carto-positron",
    center={"lat": gdf_px.unary_union.centroid.y, "lon": gdf_px.unary_union.centroid.x},
    zoom=8.5,
).update_layout(margin={"r": 0, "t": 0, "l": 0, "b": 0, "pad": 4})

geopandas doesn't find point in polygon even though it should?

I have some lat/long coordinates and need to confirm whether they are within the city of Atlanta, GA. I'm testing it out but it doesn't seem to work.
I got a geojson from here which appears to be legit:
https://gis.atlantaga.gov/?page=OPEN-DATA-HUB
import pandas as pd
import geopandas
atl = geopandas.read_file('Official_City_Boundary.geojson')
atl['geometry'] # this shows the image of Atlanta which appears correct
I plug in a couple of coordinates I got from Google Maps:
x = [33.75865421788594, -84.43974601192079]
y = [33.729117878816, -84.4017757998275]
z = [33.827871937500255, -84.39646813516548]
df = pd.DataFrame({'latitude': [x[0], y[0], z[0]], 'longitude': [x[1], y[1], z[1]]})
geometry = geopandas.points_from_xy(df.longitude, df.latitude)
points = geopandas.GeoDataFrame(geometry=geometry)
points
geometry
0 POINT (-84.43975 33.75865)
1 POINT (-84.40178 33.72912)
2 POINT (-84.39647 33.82787)
But when I check if the points are in the boundary, only one is true:
atl['geometry'].contains(points)
0 True
1 False
2 False
Why are they not all true? Am I doing it wrong?
I found some geometry similar to what you refer to.
An alternative approach is to use intersects() to find the contains relationship. NB the use of unary_union, as the Atlanta geometry I downloaded contains multiple polygons.
import pandas as pd
import geopandas
from pathlib import Path
atl = geopandas.read_file(Path.home().joinpath("Downloads").joinpath('Official_City_Council_District_Boundaries.geojson'))
atl['geometry'] # this shows the image of Atlanta which appears correct
x = [33.75865421788594, -84.43974601192079]
y = [33.729117878816, -84.4017757998275]
z = [33.827871937500255, -84.39646813516548]
df = pd.DataFrame({'latitude': [x[0], y[0], z[0]], 'longitude': [x[1], y[1], z[1]]})
geometry = geopandas.points_from_xy(df.longitude, df.latitude)
points = geopandas.GeoDataFrame(geometry=geometry, crs="epsg:4326")
points.intersects(atl.unary_union)
0 True
1 True
2 True
dtype: bool
As the documentation says:
It does not check if an element of one GeoSeries contains any element
of the other one.
contains() is evaluated elementwise, aligning the two GeoSeries by index, so you should use a loop (or test against the combined geometry) to check all points.
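The loop-based check can be sketched with plain shapely objects (a hypothetical 4x4 square and three test points, one of them outside):

```python
from shapely.geometry import Point, Polygon

# hypothetical square from (0, 0) to (4, 4)
square = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
points = [Point(1, 1), Point(3, 2), Point(10, 10)]

# test every point against the single polygon instead of relying
# on index alignment between two GeoSeries
inside = [square.contains(p) for p in points]
print(inside)  # [True, True, False]
```

This is the behavior the question expected: each point checked against the whole boundary, not row-by-row against whichever polygon shares its index.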

Changing the values of a dict in lowercase ( values are code colors ) to be accepted as a color parametrer in plotly.graph.object

So, I'm trying to use the colors from the dictionary disaster_type to draw the markers in a geoscatter depending on the type of disaster.
Basically, I want to represent each natural disaster on the graphic with its color code, e.g. if it is volcanic activity, paint it 'orange'. I also want to change the size of the marker depending on the magnitude of the disaster, but that's for another day.
here's the link of the dataset: https://www.kaggle.com/datasets/brsdincer/all-natural-disasters-19002021-eosdis
import plotly.graph_objects as go
import pandas as pd
import plotly as plt
df = pd.read_csv('1900_2021_DISASTERS - main.csv')
df.head()
df.tail()
disaster_set = {disaster for disaster in df['Disaster Type']}
disaster_type = {'Storm': 'aliceblue',
                 'Volcanic activity': 'orange',
                 'Flood': 'royalblue',
                 'Mass movement (dry)': 'darkorange',
                 'Landslide': '#C76114',
                 'Extreme temperature': '#FF0000',
                 'Animal accident': 'gray55',
                 'Glacial lake outburst': '#7D9EC0',
                 'Earthquake': '#CD8C95',
                 'Insect infestation': '#EEE8AA',
                 'Wildfire': ' #FFFF00',
                 'Fog': '#00E5EE',
                 'Drought': '#FFEFD5',
                 'Epidemic': '#00CD66 ',
                 'Impact': '#FF6347'}
# disaster_type_lower = {(k, v.lower()) for k, v in disaster_type.items()}
# print(disaster_type_lower)
# for values in disaster_type.values():
# disaster_type[values] = disaster_type.lowercase()
fig = go.Figure(data=go.Scattergeo(
    lon = df['Longitude'],
    lat = df['Latitude'],
    text = df['Country'],
    mode = 'markers',
    marker_color = disaster_type_.values()
    )
)
fig.show()
I can't figure out how; I've left comments after the dict showing how I tried to do it.
It converts them to lowercase, but now I don't know how to get them... My brain is completely melted.
It's a simple case of pandas map().
I found data on kaggle that appears to be the same as yours, so I have used that.
One type, Extreme temperature, is unmapped, so I used a fillna("red") to remove any errors.
gray55 gave me an error, so I replaced it with its RGB equivalent.
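The map-plus-fillna step in isolation, on a toy series (the 'Tsunami' key is deliberately missing from the dict to show the fallback):

```python
import pandas as pd

colors = {"Flood": "royalblue", "Storm": "aliceblue"}
s = pd.Series(["Flood", "Storm", "Tsunami"])

# unmapped categories become NaN, then fillna supplies the fallback color
mapped = s.map(colors).fillna("red")
print(mapped.tolist())  # ['royalblue', 'aliceblue', 'red']
```

The result is one color per row, which is exactly the shape marker_color expects.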
import kaggle.cli
import sys
import pandas as pd
from zipfile import ZipFile
import urllib
import plotly.graph_objects as go
# fmt: off
# download data set
url = "https://www.kaggle.com/brsdincer/all-natural-disasters-19002021-eosdis"
sys.argv = [sys.argv[0]] + f"datasets download {urllib.parse.urlparse(url).path[1:]}".split(" ")
kaggle.cli.main()
zfile = ZipFile(f'{urllib.parse.urlparse(url).path.split("/")[-1]}.zip')
dfs = {f.filename: pd.read_csv(zfile.open(f)) for f in zfile.infolist()}
# fmt: on
df = dfs["DISASTERS/1970-2021_DISASTERS.xlsx - emdat data.csv"]
disaster_type = {
"Storm": "aliceblue",
"Volcanic activity": "orange",
"Flood": "royalblue",
"Mass movement (dry)": "darkorange",
"Landslide": "#C76114",
"Extreme temperature": "#FF0000",
"Animal accident": "#8c8c8c", # gray55
"Glacial lake outburst": "#7D9EC0",
"Earthquake": "#CD8C95",
"Insect infestation": "#EEE8AA",
"Wildfire": " #FFFF00",
"Fog": "#00E5EE",
"Drought": "#FFEFD5",
"Epidemic": "#00CD66 ",
"Impact": "#FF6347",
}
fig = go.Figure(
data=go.Scattergeo(
lon=df["Longitude"],
lat=df["Latitude"],
text=df["Country"],
mode="markers",
marker_color=df["Disaster Type"].map(disaster_type).fillna("red"),
)
)
fig.show()

Keithley2400_IV Sweep_VIA RS232 - I'd like to increase the size of ':FETCh?'

I'm Kwon, an engineering student. I'm currently producing an IV sweep from a Keithley 2400 and from Python through RS-232.
While looking at the manual, I was trying to work around various errors and hit a dead end. I want to draw a graph with matplotlib, but the numbers of xvalues and yvalues do not match.
After several attempts, I found that yvalues was fixed to a size of 5.
(The graph came out well when each size was adjusted to 5.)
The contents of the manual are as follows:
"You can specify from one to all five elements."
Please help me increase the size of ':FETCh?' beyond 5 so that I can draw a graph that connects the steps I put in. Thank you for reading the long question.
import sys
startv = sys.argv[1]
stopv = sys.argv[2]
stepv = sys.argv[3]
filename = sys.argv[4]
startvprime = float(startv)
stopvprime = float(stopv)
stepvprime = float(stepv)
steps = (stopvprime - startvprime) / stepvprime + 1
# Import PyVisa and choose RS-232 as Drain-Source
import pyvisa, time
import serial
rm = pyvisa.ResourceManager()
rm.list_resources()
with rm.open_resource('COM3') as Keithley:
    Keithley.port = 'COM3'
    Keithley.baudrate = 9600
    Keithley.timeout = 25000
    Keithley.open()
    Keithley.read_termination = '\r'
    Keithley.write_termination = '\r'
    Keithley.write("*RST")
    Keithley.write("*IDN?")
    Keithley.write(":SENS:FUNC:CONC OFF")
    Keithley.write(":SOUR:FUNC VOLT")
    Keithley.write(":SENS:FUNC 'CURR:DC' ")
    Keithley.write(":SOUR:VOLT:START ", startv)
    Keithley.write(":SOUR:VOLT:STOP ", stopv)
    Keithley.write(":SOUR:VOLT:STEP ", stepv)
    Keithley.write(":SOUR:SWE:RANG AUTO")
    Keithley.write(":SENS:CURR:PROT 0.1")
    Keithley.write(":SOUR:SWE:SPAC LIN")
    Keithley.write(":SOUR:SWE:POIN", str(int(steps)))
    Keithley.write(":SOUR:SWE:DIR UP")
    Keithley.write(":TRIG:COUN", str(int(steps)))
    Keithley.write(":FORM:ELEM CURR")
    Keithley.write(":SOUR:VOLT:MODE SWE")
    Keithley.write(":OUTP ON")
    import numpy as np
    result = Keithley.query(":READ?")
    yvalues = Keithley.query_ascii_values(":FETCh?")
    Keithley.write(":OUTP OFF")
    Keithley.write(":SOUR:VOLT 0")
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from scipy import stats
xvalues = np.arange(startvprime, stopvprime+1, stepvprime)
plt.plot(xvalues, yvalues)
plt.xlabel(' Drain-Source Voltage (V)')
plt.ylabel(' Drain-Source Current (mA)')
plt.title('IV Curve')
plt.show()
np.savetxt(filename, (xvalues,yvalues))
Example error: python name.py -10 10 1 savename
=> ValueError: x and y must have same first dimension, but have shapes (21,) and (5,)
What is yvalues in your code? I think the elements of yvalues are strings because of pyvisa's query_ascii_values, so convert them to floats:
yvalues = [float(i) for i in Keithley.query_ascii_values(":FETCh?")]
Also, check the 'steps' value.
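Independent of the instrument, the string-to-float conversion the answer suggests can be sketched on a hypothetical comma-separated response:

```python
# hypothetical raw reply from the instrument for three sweep points
raw = "1.0e-6,2.5e-6,3.1e-6"
yvalues = [float(v) for v in raw.split(",")]
print(yvalues)  # [1e-06, 2.5e-06, 3.1e-06]
```

Once yvalues is a list of floats of the same length as xvalues, matplotlib's shape error disappears.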

Separate Spam and Ham for WordCloud Visualization

I am performing spam detection and want to visualize spam and ham keywords separately in Wordcloud. Here's my .csv file.
data = pd.read_csv("spam.csv",encoding='latin-1')
data = data.rename(columns = {"v1":"label", "v2":"message"})
data = data.replace({"spam":"1","ham":"0"})
Here's my code for the WordCloud. I need help with spam_words; I cannot generate the right graph.
import matplotlib.pyplot as plt
from wordcloud import WordCloud
spam_words = ' '.join(list(data[data['label'] == 1 ]['message']))
spam_wc = WordCloud(width = 512, height = 512).generate(spam_words)
plt.figure(figsize = (10,8), facecolor = 'k')
plt.imshow(spam_wc)
plt.axis('off')
plt.tight_layout(pad = 0)
plt.show()
The issue is that the current code replaces "spam" and "ham" with the one-character strings "1" and "0", but you filter the DataFrame based on comparison with the integer 1. Change the replace line to this:
data = data.replace({"spam": 1, "ham": 0})
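The dtype mismatch can be reproduced on a toy frame (hypothetical three-row data standing in for spam.csv):

```python
import pandas as pd

data = pd.DataFrame({"label": ["spam", "ham", "spam"],
                     "message": ["win money", "hello", "free prize"]})

# replacing with the strings "1"/"0" leaves a string column,
# so comparing against the integer 1 matches nothing
as_str = data.replace({"spam": "1", "ham": "0"})
print((as_str["label"] == 1).sum())  # 0

# replacing with integers makes the comparison work as intended
as_int = data.replace({"spam": 1, "ham": 0})
print((as_int["label"] == 1).sum())  # 2
```

With the integer replacement, `data[data['label'] == 1]['message']` selects the spam rows and the WordCloud gets the right text.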
