SVG SMIL animation additive morphing path

I have a fairly simple SVG morph animation using SMIL, where the path morphs when elements inside the SVG are clicked.
I'd like the path to morph from its most recent shape, but instead it snaps back to the original path each time. Ideally the blob shapes would go up and down (undulate) as each link is clicked.
I've tried additive="replace", which sounds like the correct attribute to use, but it doesn't work.
Does anyone know the correct combination to start the animation from the last point?
<svg viewBox="0 0 1196.87 254" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<g fill="#64bab7" font-family="Helvetica" font-size="25">
<a xlink:href="#link1" class="loop-tab"><text id="link1" transform="translate(62.5 111.96)"><tspan letter-spacing="-.07em">1</tspan><tspan letter-spacing="-.05em" x="12.37" y="0">.</tspan><tspan x="18.47" y="0"/><tspan letter-spacing="-.01em" x="24.22" y="0">Link</tspan></text></a>
<a xlink:href="#link2" class="loop-tab"><text id="link2" transform="translate(305.35 111.96)">2. Link</text></a>
<a xlink:href="#link3" class="loop-tab"><text id="link3" transform="translate(548.57 111.96)">3. Link</text></a>
<a xlink:href="#link4" class="loop-tab"><text id="link4" transform="translate(792.92 111.96)">4. Link</text></a>
<a xlink:href="#link5" class="loop-tab"><text id="link5" transform="translate(1047.39 111.96)">5. Link</text></a>
</g>
<path fill="none" stroke="#6dc4c2" stroke-miterlimit="10" stroke-width="2.04" d="m598.43 205.8c-61.42 0-92.13-51.23-122.88-102.45s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-65.55 102.37-127 102.37c-56.22 0-101.91-47.8-101.91-102.4 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 102.39 122.81 102.39 92.13-51.23 122.84-102.45 61.45-102.35 122.84-102.35h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 102.47 122.84 102.47 92.09-51.15 122.84-102.37 65.55-102.43 127-102.43c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.68 102.4-101.92 102.4-61.42 0-96.3-51.15-127-102.37s-61.46-102.41-122.88-102.41-92.08 51.17-122.76 102.35-61.47 102.45-122.89 102.45z">
<animate begin="link1.click" repeatCount="1" fill="freeze" accumulate="sum" additive="replace" attributeName="d" dur="0.5s"
values="m598.43 205.8c-61.42 0-92.13-51.23-122.88-102.45s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-65.55 102.37-127 102.37c-56.22 0-101.91-47.8-101.91-102.4 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 102.39 122.81 102.39 92.13-51.23 122.84-102.45 61.45-102.35 122.84-102.35h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 102.47 122.84 102.47 92.09-51.15 122.84-102.37 65.55-102.43 127-102.43c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.68 102.4-101.92 102.4-61.42 0-96.3-51.15-127-102.37s-61.46-102.41-122.88-102.41-92.08 51.17-122.76 102.35-61.47 102.45-122.89 102.45z;
m598.43 205.8c-61.42 0-92.13-51.23-122.88-102.45s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-55.66 148.77-127 148.77c-63.68 0-101.91-94.2-101.91-148.8 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 102.39 122.81 102.39 92.13-51.23 122.84-102.45 61.45-102.35 122.84-102.35h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 102.47 122.84 102.47 92.09-51.15 122.84-102.37 65.55-102.43 127-102.43c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.68 102.4-101.92 102.4-61.42 0-96.3-51.15-127-102.37s-61.46-102.41-122.88-102.41-92.08 51.17-122.76 102.35-61.47 102.45-122.89 102.45z"/>
<animate begin="link2.click" repeatCount="1" fill="freeze" accumulate="sum" additive="replace" attributeName="d" dur="0.5s"
values="m598.43 205.8c-61.42 0-92.13-51.23-122.88-102.45s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-65.55 102.37-127 102.37c-56.22 0-101.91-47.8-101.91-102.4 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 102.39 122.81 102.39 92.13-51.23 122.84-102.45 61.45-102.35 122.84-102.35h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 102.47 122.84 102.47 92.09-51.15 122.84-102.37 65.55-102.43 127-102.43c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.68 102.4-101.92 102.4-61.42 0-96.3-51.15-127-102.37s-61.46-102.41-122.88-102.41-92.08 51.17-122.76 102.35-61.47 102.45-122.89 102.45z;
m598.43 205.8c-61.42 0-92.13-51.23-122.88-102.45s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-65.55 102.37-127 102.37c-56.22 0-101.91-47.8-101.91-102.4 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 148.59 122.81 148.59 92.13-97.46 122.84-148.68 61.45-102.32 122.84-102.32h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 102.47 122.84 102.47 92.09-51.15 122.84-102.37 65.55-102.43 127-102.43c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.68 102.4-101.92 102.4-61.42 0-96.3-51.15-127-102.37s-61.46-102.41-122.88-102.41-92.08 51.17-122.76 102.35-61.47 102.45-122.89 102.45z"/>
<animate begin="link3.click" repeatCount="1" fill="freeze" accumulate="sum" additive="replace" attributeName="d" dur="0.5s"
values="m598.43 205.8c-61.42 0-92.13-51.23-122.88-102.45s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-65.55 102.37-127 102.37c-56.22 0-101.91-47.8-101.91-102.4 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 102.39 122.81 102.39 92.13-51.23 122.84-102.45 61.45-102.35 122.84-102.35h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 102.47 122.84 102.47 92.09-51.15 122.84-102.37 65.55-102.43 127-102.43c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.68 102.4-101.92 102.4-61.42 0-96.3-51.15-127-102.37s-61.46-102.41-122.88-102.41-92.08 51.17-122.76 102.35-61.47 102.45-122.89 102.45z;
m598.43 252c-61.43 0-92.13-97.43-122.88-148.65s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-65.55 102.37-127 102.37c-56.22 0-101.91-47.8-101.91-102.4 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 102.39 122.81 102.39 92.13-51.23 122.84-102.45 61.45-102.35 122.84-102.35h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 102.47 122.84 102.47 92.09-51.15 122.84-102.37 65.55-102.43 127-102.43c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.68 102.4-101.92 102.4-61.42 0-96.3-51.15-127-102.37s-61.46-102.41-122.88-102.41-92.08 51.17-122.76 102.35-61.47 148.65-122.89 148.65z"/>
<animate begin="link4.click" repeatCount="1" fill="freeze" accumulate="sum" additive="replace" attributeName="d" dur="0.5s"
values="m598.43 205.8c-61.42 0-92.13-51.23-122.88-102.45s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-65.55 102.37-127 102.37c-56.22 0-101.91-47.8-101.91-102.4 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 102.39 122.81 102.39 92.13-51.23 122.84-102.45 61.45-102.35 122.84-102.35h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 102.47 122.84 102.47 92.09-51.15 122.84-102.37 65.55-102.43 127-102.43c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.68 102.4-101.92 102.4-61.42 0-96.3-51.15-127-102.37s-61.46-102.41-122.88-102.41-92.08 51.17-122.76 102.35-61.47 102.45-122.89 102.45z;
m598.43 205.8c-61.42 0-92.13-51.23-122.88-102.45s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-65.55 102.37-127 102.37c-56.22 0-101.91-47.8-101.91-102.4 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 102.39 122.81 102.39 92.13-51.23 122.84-102.45 61.45-102.35 122.84-102.35h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 148.67 122.84 148.67 92.09-97.38 122.8-148.6 65.59-102.4 127.04-102.4c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.68 102.4-101.92 102.4-61.42 0-96.3-51.15-127-102.37s-61.46-102.41-122.88-102.41-92.08 51.17-122.76 102.35-61.47 102.45-122.89 102.45z"/>
<animate begin="link5.click" repeatCount="1" fill="freeze" accumulate="sum" additive="replace" attributeName="d" dur="0.5s"
values="m598.43 205.8c-61.42 0-92.13-51.23-122.88-102.45s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-65.55 102.37-127 102.37c-56.22 0-101.91-47.8-101.91-102.4 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 102.39 122.81 102.39 92.13-51.23 122.84-102.45 61.45-102.35 122.84-102.35h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 102.47 122.84 102.47 92.09-51.15 122.84-102.37 65.55-102.43 127-102.43c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.68 102.4-101.92 102.4-61.42 0-96.3-51.15-127-102.37s-61.46-102.41-122.88-102.41-92.08 51.17-122.76 102.35-61.47 102.45-122.89 102.45z;
m598.43 205.8c-61.42 0-92.13-51.23-122.88-102.45s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-65.55 102.37-127 102.37c-56.22 0-101.91-47.8-101.91-102.4 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 102.39 122.81 102.39 92.13-51.23 122.84-102.45 61.45-102.35 122.84-102.35h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 102.47 122.84 102.47 92.09-51.15 122.84-102.37 65.55-102.43 127-102.43c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.69 148.63-101.94 148.63-61.42 0-96.28-97.38-127-148.6s-61.44-102.41-122.86-102.41-92.08 51.17-122.76 102.35-61.47 102.45-122.89 102.45z"/>
</path>
</svg>

Have worked this out now: replacing each two-step values list with a single to attribute does the trick. A to animation interpolates from the current value of d (including the frozen result of earlier animations) rather than from a hard-coded start value, so each click morphs from the most recent shape.
<svg viewBox="0 0 1196.87 254" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<g fill="#64bab7" font-family="Helvetica" font-size="25">
<a xlink:href="#link1" class="loop-tab"><text id="link1" transform="translate(62.5 111.96)"><tspan letter-spacing="-.07em">1</tspan><tspan letter-spacing="-.05em" x="12.37" y="0">.</tspan><tspan x="18.47" y="0"/><tspan letter-spacing="-.01em" x="24.22" y="0">Link</tspan></text></a>
<a xlink:href="#link2" class="loop-tab"><text id="link2" transform="translate(305.35 111.96)">2. Link</text></a>
<a xlink:href="#link3" class="loop-tab"><text id="link3" transform="translate(548.57 111.96)">3. Link</text></a>
<a xlink:href="#link4" class="loop-tab"><text id="link4" transform="translate(792.92 111.96)">4. Link</text></a>
<a xlink:href="#link5" class="loop-tab"><text id="link5" transform="translate(1047.39 111.96)">5. Link</text></a>
</g>
<path fill="none" stroke="#6dc4c2" stroke-miterlimit="10" stroke-width="2.04" d="m598.43 205.8c-61.42 0-92.13-51.23-122.88-102.45s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-65.55 102.37-127 102.37c-56.22 0-101.91-47.8-101.91-102.4 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 102.39 122.81 102.39 92.13-51.23 122.84-102.45 61.45-102.35 122.84-102.35h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 102.47 122.84 102.47 92.09-51.15 122.84-102.37 65.55-102.43 127-102.43c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.68 102.4-101.92 102.4-61.42 0-96.3-51.15-127-102.37s-61.46-102.41-122.88-102.41-92.08 51.17-122.76 102.35-61.47 102.45-122.89 102.45z">
<animate begin="link1.click" attributeName="d" repeatCount="1" fill="freeze" accumulate="sum" additive="replace" dur="0.5s"
to="m598.43 205.8c-61.42 0-92.13-51.23-122.88-102.45s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-55.66 148.77-127 148.77c-63.68 0-101.91-94.2-101.91-148.8 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 102.39 122.81 102.39 92.13-51.23 122.84-102.45 61.45-102.35 122.84-102.35h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 102.47 122.84 102.47 92.09-51.15 122.84-102.37 65.55-102.43 127-102.43c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.68 102.4-101.92 102.4-61.42 0-96.3-51.15-127-102.37s-61.46-102.41-122.88-102.41-92.08 51.17-122.76 102.35-61.47 102.45-122.89 102.45z"/>
<animate begin="link2.click" attributeName="d" repeatCount="1" fill="freeze" accumulate="sum" additive="replace" dur="0.5s"
to="m598.43 205.8c-61.42 0-92.13-51.23-122.88-102.45s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-65.55 102.37-127 102.37c-56.22 0-101.91-47.8-101.91-102.4 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 148.59 122.81 148.59 92.13-97.46 122.84-148.68 61.45-102.32 122.84-102.32h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 102.47 122.84 102.47 92.09-51.15 122.84-102.37 65.55-102.43 127-102.43c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.68 102.4-101.92 102.4-61.42 0-96.3-51.15-127-102.37s-61.46-102.41-122.88-102.41-92.08 51.17-122.76 102.35-61.47 102.45-122.89 102.45z"/>
<animate begin="link3.click" attributeName="d" repeatCount="1" fill="freeze" accumulate="sum" additive="replace" dur="0.5s"
to="m598.43 252c-61.43 0-92.13-97.43-122.88-148.65s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-65.55 102.37-127 102.37c-56.22 0-101.91-47.8-101.91-102.4 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 102.39 122.81 102.39 92.13-51.23 122.84-102.45 61.45-102.35 122.84-102.35h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 102.47 122.84 102.47 92.09-51.15 122.84-102.37 65.55-102.43 127-102.43c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.68 102.4-101.92 102.4-61.42 0-96.3-51.15-127-102.37s-61.46-102.41-122.88-102.41-92.08 51.17-122.76 102.35-61.47 148.65-122.89 148.65z"/>
<animate begin="link4.click" attributeName="d" repeatCount="1" fill="freeze" accumulate="sum" additive="replace" dur="0.5s"
to="m598.43 205.8c-61.42 0-92.13-51.23-122.88-102.45s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-65.55 102.37-127 102.37c-56.22 0-101.91-47.8-101.91-102.4 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 102.39 122.81 102.39 92.13-51.23 122.84-102.45 61.45-102.35 122.84-102.35h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 148.67 122.84 148.67 92.09-97.38 122.8-148.6 65.59-102.4 127.04-102.4c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.68 102.4-101.92 102.4-61.42 0-96.3-51.15-127-102.37s-61.46-102.41-122.88-102.41-92.08 51.17-122.76 102.35-61.47 102.45-122.89 102.45z"/>
<animate begin="link5.click" attributeName="d" repeatCount="1" fill="freeze" accumulate="sum" additive="replace" dur="0.5s"
to="m598.43 205.8c-61.42 0-92.13-51.23-122.88-102.45s-61.38-102.35-122.76-102.35-92.13 51.24-122.88 102.43-65.55 102.37-127 102.37c-56.22 0-101.91-47.8-101.91-102.4 0-61.84 51.61-102.4 101.9-102.4 61.42 0 96.3 51.22 127 102.41s61.43 102.39 122.81 102.39 92.13-51.23 122.84-102.45 61.45-102.35 122.84-102.35h.09c61.42 0 92.13 51.15 122.84 102.33s61.42 102.47 122.84 102.47 92.09-51.15 122.84-102.37 65.55-102.43 127-102.43c50.29 0 101.88 40.54 101.88 102.38 0 54.55-45.69 148.63-101.94 148.63-61.42 0-96.28-97.38-127-148.6s-61.44-102.41-122.86-102.41-92.08 51.17-122.76 102.35-61.47 102.45-122.89 102.45z"/>
</path>
</svg>
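Stripped to its essentials, the pattern looks like this (a minimal sketch with placeholder path data, not the markup from above):
<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
<circle id="trigger" cx="10" cy="10" r="5"/>
<path fill="none" stroke="black" d="M10,50 C30,10 70,10 90,50">
<!-- a to-animation starts from the current value of d, so each click
     morphs from wherever the previous animation froze -->
<animate begin="trigger.click" attributeName="d" dur="0.5s" fill="freeze"
         to="M10,50 C30,90 70,90 90,50"/>
</path>
</svg>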

Related

Is there a way to (conditionally) forward fill values in a Pandas DF in a vectorized way based on multiple criteria?

In the below dataframe, I'm trying to forward fill the Pos and Stop columns based on the following criteria:
If (Prior Pos == -1) & (Current High < Prior Stop)
If (Prior Pos == 1 ) & (Current Low > Prior Stop)
Once either of these conditions is violated, the values should remain unchanged (i.e. stay 0) until the next non-zero instance of Pos and Stop, at which point the criteria above should be evaluated again.
import numpy as np
import pandas as pd
Open = {'Open':np.array([126.81999969, 126.55999756, 123.16000366, 125.23000336,127.81999969, 126.01000214, 127.81999969, 126.95999908,126.44000244, 125.56999969, 125.08000183, 124.27999878,124.68000031, 124.06999969, 126.16999817, 126.59999847,127.20999908])}
High = {'High': np.array([126.93000031, 126.98999786, 124.91999817, 127.72000122,128. , 127.94000244, 128.32000732, 127.38999939, 127.63999939, 125.80000305, 125.34999847, 125.23999786, 124.84999847, 126.16000366, 126.31999969, 128.46000671, 127.75 ])}
Low = {'Low' : np.array([125.16999817, 124.77999878, 122.86000061, 125.09999847,125.20999908, 125.94000244, 126.31999969, 126.41999817, 125.08000183, 124.55000305, 123.94000244, 124.05000305, 123.12999725, 123.84999847, 124.83000183, 126.20999908, 126.51999664])}
Close = {'Close': np.array([126.26999664, 124.84999847, 124.69000244, 127.30999756,125.43000031, 127.09999847, 126.90000153, 126.84999847, 125.27999878, 124.61000061, 124.27999878, 125.05999756, 123.54000092, 125.88999939, 125.90000153, 126.73999786, 127.12999725])}
Pos = {'Pos': np.array([ 0, 0, 1, 0, -1, 0, 0, 0, 0, -1, 0, 1, 0, 1, 0, 0, 0])}
Stop = {'Stop': np.array([ 0. , 0. , 122.86000061, 0. , 128. , 0. , 0. , 0. ,0. , 125.80000305, 0. , 124.05000305, 0. , 123.84999847, 0. , 0. , 0. ])}
index = pd.date_range('2022-1-1',periods = 17)
df = pd.DataFrame(dict(Open, **High, **Low, **Close, **Pos, **Stop), index = index)
df
Open High Low Close Pos Stop
Date
2022-01-01 126.82 126.93 125.17 126.27 0 0.00
2022-01-02 126.56 126.99 124.78 124.85 0 0.00
2022-01-03 123.16 124.92 122.86 124.69 1 122.86
2022-01-04 125.23 127.72 125.10 127.31 0 0.00
2022-01-05 127.82 128.00 125.21 125.43 -1 128.00
2022-01-06 126.01 127.94 125.94 127.10 0 0.00
2022-01-07 127.82 128.32 126.32 126.90 0 0.00
2022-01-08 126.96 127.39 126.42 126.85 0 0.00
2022-01-09 126.44 127.64 125.08 125.28 0 0.00
2022-01-10 125.57 125.80 124.55 124.61 -1 125.80
2022-01-11 125.08 125.35 123.94 124.28 0 0.00
2022-01-12 124.28 125.24 124.05 125.06 1 124.05
2022-01-13 124.68 124.85 123.13 123.54 0 0.00
2022-01-14 124.07 126.16 123.85 125.89 1 123.85
2022-01-15 126.17 126.32 124.83 125.90 0 0.00
2022-01-16 126.60 128.46 126.21 126.74 0 0.00
2022-01-17 127.21 127.75 126.52 127.13 0 0.00
The desired result is:
Open High Low Close Pos Stop
Date
2022-01-01 126.82 126.93 125.17 126.27 0 0.00
2022-01-02 126.56 126.99 124.78 124.85 0 0.00
2022-01-03 123.16 124.92 122.86 124.69 1 122.86
2022-01-04 125.23 127.72 125.10 127.31 1 122.86
2022-01-05 127.82 128.00 125.21 125.43 -1 128.00
2022-01-06 126.01 127.94 125.94 127.10 -1 128.00
2022-01-07 127.82 128.32 126.32 126.90 0 0.00
2022-01-08 126.96 127.39 126.42 126.85 0 0.00
2022-01-09 126.44 127.64 125.08 125.28 0 0.00
2022-01-10 125.57 125.80 124.55 124.61 -1 125.80
2022-01-11 125.08 125.35 123.94 124.28 -1 125.80
2022-01-12 124.28 125.24 124.05 125.06 1 124.05
2022-01-13 124.68 124.85 123.13 123.54 0 0.00
2022-01-14 124.07 126.16 123.85 125.89 1 123.85
2022-01-15 126.17 126.32 124.83 125.90 1 123.85
2022-01-16 126.60 128.46 126.21 126.74 1 123.85
2022-01-17 127.21 127.75 126.52 127.13 1 123.85
I've tried using the groupby and where methods, which produce a df that is close to the desired result but does not keep the values unchanged for the subsequent rows in a group once the criteria is breached.
s = df[['Pos','Stop']].mask(df['Stop'].eq(0)).ffill()
grouped = s.groupby(['Pos','Stop'])
df.update(grouped.apply(lambda g: g.where((s['Pos'] == 1) & (s['Stop'] <= df['Low']) | (s['Pos'] == -1) & (s['Stop'] >= df['High']))))
df
Open High Low Close Pos Stop
Date
2022-01-01 126.82 126.93 125.17 126.27 0.0 0.00
2022-01-02 126.56 126.99 124.78 124.85 0.0 0.00
2022-01-03 123.16 124.92 122.86 124.69 1.0 122.86
2022-01-04 125.23 127.72 125.10 127.31 1.0 122.86
2022-01-05 127.82 128.00 125.21 125.43 -1.0 128.00
2022-01-06 126.01 127.94 125.94 127.10 -1.0 128.00
2022-01-07 127.82 128.32 126.32 126.90 0.0 0.00
2022-01-08 126.96 127.39 126.42 126.85 -1.0 128.00
2022-01-09 126.44 127.64 125.08 125.28 -1.0 128.00
2022-01-10 125.57 125.80 124.55 124.61 -1.0 125.80
2022-01-11 125.08 125.35 123.94 124.28 -1.0 125.80
2022-01-12 124.28 125.24 124.05 125.06 1.0 124.05
2022-01-13 124.68 124.85 123.13 123.54 0.0 0.00
2022-01-14 124.07 126.16 123.85 125.89 1.0 123.85
2022-01-15 126.17 126.32 124.83 125.90 1.0 123.85
2022-01-16 126.60 128.46 126.21 126.74 1.0 123.85
2022-01-17 127.21 127.75 126.52 127.13 1.0 123.85
I would suggest iterating over all rows and checking both conditions. Unfortunately, I cannot reproduce your result, as your code generates a different dataframe for me. Nonetheless, I think the following code does what you need:
import numpy as np
import pandas as pd
Open = {'Open':np.array([126.81999969, 126.55999756, 123.16000366, 125.23000336,127.81999969, 126.01000214, 127.81999969, 126.95999908,126.44000244, 125.56999969, 125.08000183, 124.27999878,124.68000031, 124.06999969, 126.16999817, 126.59999847,127.20999908])}
High = {'High': np.array([126.93000031, 126.98999786, 124.91999817, 127.72000122,128. , 127.94000244, 128.32000732, 127.38999939, 127.63999939, 125.80000305, 125.34999847, 125.23999786, 124.84999847, 126.16000366, 126.31999969, 128.46000671, 127.75 ])}
Low = {'Low' : np.array([125.16999817, 124.77999878, 122.86000061, 125.09999847,125.20999908, 125.94000244, 126.31999969, 126.41999817, 125.08000183, 124.55000305, 123.94000244, 124.05000305, 123.12999725, 123.84999847, 124.83000183, 126.20999908, 126.51999664])}
Close = {'Close': np.array([126.26999664, 124.84999847, 124.69000244, 127.30999756,125.43000031, 127.09999847, 126.90000153, 126.84999847, 125.27999878, 124.61000061, 124.27999878, 125.05999756, 123.54000092, 125.88999939, 125.90000153, 126.73999786, 127.12999725])}
Pos = {'Pos': np.array([ 0, 0, 1, 0, -1, 0, 0, 0, 0, -1, 0, 1, 0, 1, 0, 0, 0])}
Stop = {'Stop': np.array([ 0. , 0. , 122.86000061, 0. , 128. , 0. , 0. , 0. ,0. , 125.80000305, 0. , 124.05000305, 0. , 123.84999847, 0. , 0. , 0. ])}
index = pd.date_range('2022-1-1',periods = 17)
df = pd.DataFrame(dict(Open, **High, **Low, **Close, **Pos, **Stop), index = index)
# Produce an integer index we can iterate over
df = df.reset_index()
# Iterate over every row and forward-fill while the criteria hold
for i in range(1, len(df)):
    curr_pos = df.loc[i, 'Pos']
    curr_low = df.loc[i, 'Low']
    curr_high = df.loc[i, 'High']
    prev_pos = df.loc[i-1, 'Pos']
    prev_stop = df.loc[i-1, 'Stop']
    # Fill only rows that carry no signal of their own, and only
    # while the prior stop has not been breached
    if (curr_pos == 0) and (((prev_pos == -1) and (curr_high < prev_stop))
                            or ((prev_pos == 1) and (curr_low > prev_stop))):
        df.loc[i, 'Pos'] = prev_pos
        df.loc[i, 'Stop'] = prev_stop
# Restore the original index
df = df.set_index('index')
df
EDIT: Based on your solution, I tried to remove some operations that I think are costly. Would you mind testing the following code?
Open = {'Open':np.array([126.81999969, 126.55999756, 123.16000366, 125.23000336,127.81999969, 126.01000214, 127.81999969, 126.95999908,126.44000244, 125.56999969, 125.08000183, 124.27999878,124.68000031, 124.06999969, 126.16999817, 126.59999847,127.20999908])}
High = {'High': np.array([126.93000031, 126.98999786, 124.91999817, 127.72000122,128. , 127.94000244, 128.32000732, 127.38999939, 127.63999939, 125.80000305, 125.34999847, 125.23999786, 124.84999847, 126.16000366, 126.31999969, 128.46000671, 127.75 ])}
Low = {'Low' : np.array([125.16999817, 124.77999878, 122.86000061, 125.09999847,125.20999908, 125.94000244, 126.31999969, 126.41999817, 125.08000183, 124.55000305, 123.94000244, 124.05000305, 123.12999725, 123.84999847, 124.83000183, 126.20999908, 126.51999664])}
Close = {'Close': np.array([126.26999664, 124.84999847, 124.69000244, 127.30999756,125.43000031, 127.09999847, 126.90000153, 126.84999847, 125.27999878, 124.61000061, 124.27999878, 125.05999756, 123.54000092, 125.88999939, 125.90000153, 126.73999786, 127.12999725])}
Pos = {'Pos': np.array([ 0, 0, 1, 0, -1, 0, 0, 0, 0, -1, 0, 1, 0, 1, 0, 0, 0])}
Stop = {'Stop': np.array([ 0. , 0. , 122.86000061, 0. , 128. , 0. , 0. , 0. ,0. , 125.80000305, 0. , 124.05000305, 0. , 123.84999847, 0. , 0. , 0. ])}
index = pd.date_range('2022-1-1',periods = 17)
df = pd.DataFrame(dict(Open, **High, **Low, **Close, **Pos, **Stop), index = index)
df_tmp = df.copy()
df_tmp[['Pos','Stop']] = df_tmp[['Pos','Stop']].mask(df['Stop'].eq(0)).ffill()
df_tmp['tmp'] = (df_tmp['Pos'] == 1) & (df_tmp['Stop'] <= df_tmp['Low']) | (df_tmp['Pos'] == -1) & (df_tmp['Stop'] >= df_tmp['High'])
positions = df_tmp.groupby(['Pos', 'Stop'])['tmp'].cummin().eq(1)
df[['Pos','Stop']] = df_tmp[['Pos','Stop']].where(positions, 0)
print(df.round(2))
Use the mask, where, groupby and cummin methods:
s = df[['Pos','Stop']].mask(df['Stop'].eq(0)).ffill()
# -999 is necessary because `apply`, used in the next line, drops NaNs
grouped = s.fillna(-999).groupby(['Pos','Stop'], dropna = False)
# Identify days where the stop has been breached with a NaN
grouped1 = grouped.apply(
    lambda g: g.where(
        (s['Pos'] == 1) & (s['Stop'] <= df['Low']) |
        (s['Pos'] == -1) & (s['Stop'] >= df['High'])
    ))
stop_breached = grouped1['Pos'].notna()
positions = s.assign(tmp = stop_breached).groupby(['Pos', 'Stop'])['tmp'].cummin().eq(1)
df[['Pos','Stop']] = grouped1.where(positions, 0)
print(df.round(2))
Open High Low Close Pos Stop
2022-01-01 126.82 126.93 125.17 126.27 0.0 0.00
2022-01-02 126.56 126.99 124.78 124.85 0.0 0.00
2022-01-03 123.16 124.92 122.86 124.69 1.0 122.86
2022-01-04 125.23 127.72 125.10 127.31 1.0 122.86
2022-01-05 127.82 128.00 125.21 125.43 -1.0 128.00
2022-01-06 126.01 127.94 125.94 127.10 -1.0 128.00
2022-01-07 127.82 128.32 126.32 126.90 0.0 0.00
2022-01-08 126.96 127.39 126.42 126.85 0.0 0.00
2022-01-09 126.44 127.64 125.08 125.28 0.0 0.00
2022-01-10 125.57 125.80 124.55 124.61 -1.0 125.80
2022-01-11 125.08 125.35 123.94 124.28 -1.0 125.80
2022-01-12 124.28 125.24 124.05 125.06 1.0 124.05
2022-01-13 124.68 124.85 123.13 123.54 0.0 0.00
2022-01-14 124.07 126.16 123.85 125.89 1.0 123.85
2022-01-15 126.17 126.32 124.83 125.90 1.0 123.85
2022-01-16 126.60 128.46 126.21 126.74 1.0 123.85
2022-01-17 127.21 127.75 126.52 127.13 1.0 123.85
EDIT: Below is an updated solution that is more concise and much faster. See the comments for details.
Open = {'Open':np.array([126.81999969, 126.55999756, 123.16000366, 125.23000336,127.81999969, 126.01000214, 127.81999969, 126.95999908,126.44000244, 125.56999969, 125.08000183, 124.27999878,124.68000031, 124.06999969, 126.16999817, 126.59999847,127.20999908])}
High = {'High': np.array([126.93000031, 126.98999786, 124.91999817, 127.72000122,128. , 127.94000244, 128.32000732, 127.38999939, 127.63999939, 125.80000305, 125.34999847, 125.23999786, 124.84999847, 126.16000366, 126.31999969, 128.46000671, 127.75 ])}
Low = {'Low' : np.array([125.16999817, 124.77999878, 122.86000061, 125.09999847,125.20999908, 125.94000244, 126.31999969, 126.41999817, 125.08000183, 124.55000305, 123.94000244, 124.05000305, 123.12999725, 123.84999847, 124.83000183, 126.20999908, 126.51999664])}
Close = {'Close': np.array([126.26999664, 124.84999847, 124.69000244, 127.30999756,125.43000031, 127.09999847, 126.90000153, 126.84999847, 125.27999878, 124.61000061, 124.27999878, 125.05999756, 123.54000092, 125.88999939, 125.90000153, 126.73999786, 127.12999725])}
Pos = {'Pos': np.array([ 0, 0, 1, 0, -1, 0, 0, 0, 0, -1, 0, 1, 0, 1, 0, 0, 0])}
Stop = {'Stop': np.array([ 0. , 0. , 122.86000061, 0. , 128. , 0. , 0. , 0. ,0. , 125.80000305, 0. , 124.05000305, 0. , 123.84999847, 0. , 0. , 0. ])}
index = pd.date_range('2022-1-1',periods = 17)
df = pd.DataFrame(dict(Open, **High, **Low, **Close, **Pos, **Stop), index = index)
df_tmp = df.copy()
# Forward-fill Pos/Stop within each signal block
df_tmp[['Pos','Stop']] = df_tmp[['Pos','Stop']].mask(df['Stop'].eq(0)).ffill()
# True while the stop has not been breached
df_tmp['tmp'] = (df_tmp['Pos'] == 1) & (df_tmp['Stop'] <= df_tmp['Low']) | (df_tmp['Pos'] == -1) & (df_tmp['Stop'] >= df_tmp['High'])
# cummin makes the first breach in each group persist as False
positions = df_tmp.groupby(['Pos', 'Stop'])['tmp'].cummin().eq(1)
df[['Pos','Stop']] = df_tmp[['Pos','Stop']].where(positions, 0)
print(df.round(2))
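A quick sanity check against the desired output (a sketch; the expected values are transcribed from the desired-result table in the question):
# Expected Pos/Stop columns from the desired result above
expected_pos = [0, 0, 1, 1, -1, -1, 0, 0, 0, -1, -1, 1, 0, 1, 1, 1, 1]
expected_stop = [0, 0, 122.86, 122.86, 128.00, 128.00, 0, 0, 0,
                 125.80, 125.80, 124.05, 0, 123.85, 123.85, 123.85, 123.85]
assert list(df['Pos']) == expected_pos
assert list(df['Stop'].round(2)) == expected_stop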

YOLO: Wrong annotation: class_id = 6. But class_id should be [from 0 to 0]

I'm trying to train YOLO for object detection with 8 classes using Darknet. However, while training I receive the error:
Wrong annotation: class_id = 4. But class_id should be [from 0 to 0], file: data/obj/images/IMG_8943.txt
IMG_8943.txt is one of the text files where I store my annotations, which were created with labelImg. I don't really understand why I'm getting this error, since I have specified the number of classes within my config file:
[net]
# Testing
batch=8
subdivisions=1
# Training
batch=64
subdivisions=166
classes = 8
width=416
height=416
#filters = 39
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
learning_rate=0.001
burn_in=1000
max_batches = 4000
policy=steps
steps=400000,450000
scales=.1,.1
[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky
# Downsample
[convolutional]
batch_normalize=1
filters=64
size=3
stride=2
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=32
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
# Downsample
[convolutional]
batch_normalize=1
filters=128
size=3
stride=2
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
# Downsample
[convolutional]
batch_normalize=1
filters=256
size=3
stride=2
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
# Downsample
[convolutional]
batch_normalize=1
filters=512
size=3
stride=2
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
# Downsample
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=2
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky
[shortcut]
from=-3
activation=linear
######################
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters=18
activation=linear
[yolo]
mask = 6,7,8
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
classes=1
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
[route]
layers = -4
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[upsample]
stride=2
[route]
layers = -1, 61
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=512
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters=18
activation=linear
[yolo]
mask = 3,4,5
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
classes=1
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
[route]
layers = -4
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[upsample]
stride=2
[route]
layers = -1, 36
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters=18
activation=linear
[yolo]
mask = 0,1,2
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
classes=1
num=9
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
Furthermore, I have used the following command to set up the object names:
!echo -e 'classes= 1\ntrain = data/train.txt\nvalid = data/test.txt\nnames = data/obj.names\nbackup = /mydrive/yolov32' > data/obj.data
Can anybody give me a hint about what's missing?
Change the class id in each .txt file so the classes are numbered from 0 for the first class up to 7 (as there are 8 classes in total).
Second, change the batch size for testing from 8 to 1.
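For context (not part of the original answer): Darknet reads the class count from each [yolo] section and from the .data file, not from [net], and valid class ids run from 0 to classes - 1, which is why the error says "[from 0 to 0]" when classes=1. A sketch of the settings that would need updating for 8 classes, assuming the standard YOLOv3 layout:
# In all three [yolo] sections, and in the [convolutional] layer
# directly above each one:
[convolutional]
size=1
stride=1
pad=1
filters=39    # (classes + 5) * 3 = (8 + 5) * 3, instead of 18
activation=linear

[yolo]
classes=8     # instead of 1

# And in data/obj.data:
classes = 8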

How to transform, encode, or standardize a column with a list (of items) per row before machine learning model?

The data has various value types. As shown below, I can apply OneHotEncoder to the categorical columns, but for the columns in which each row holds a list of substrings or tokens (SUBSTRING_4L and SUBSTRING_5L), I get the error TypeError: argument must be a string or number.
I have been searching on google, stackoverflow, and scikit-learn documentation for quite some time without landing on anything useful.
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.impute import SimpleImputer
data = {
    'AGE': [39, np.nan, 21, 13, 45, 26, np.nan, 48],
    'URBAN': ['urban', np.nan, 'urban', 'rural', 'urban', 'rural', 'urban', 'urban'],
    'NAME': ['jack', 'juste', 'ann', np.nan, 'jack', 'gil', 'phil', 'tyler'],
    'SUBSTRING_4L': [['jack'], ['just', 'uste'], [], [], ['jack'], [], ['phil'], ['tyle', 'yler']],
    'SUBSTRING_5L': [[], ['juste'], [], [], [], [], [], ['tyler']],
    'DISEASE': ['healthy', 'cancer', 'cancer', 'dementia', 'cancer', 'heart', 'healthy', 'cancer'],
}
df = pd.DataFrame(data)
def transform_numerical():
    x_train, x_test, y_train, y_test = train_test_split(
        df[['AGE']], df['DISEASE'], test_size=0.5, random_state=3)
    scaler = preprocessing.StandardScaler().fit(x_train)
    x_trainT = scaler.transform(x_train)
    x_testT = scaler.transform(x_test)
    print(x_train)
    print(x_trainT)
    print()
    print(x_test)
    print(x_testT)
    print('/////////////////////////', '\n')
transform_numerical()

def transform_categorical():
    x_train, x_test, y_train, y_test = train_test_split(
        df[['URBAN', 'NAME']], df['DISEASE'], test_size=0.5, random_state=3)
    cat_imputer = SimpleImputer(strategy='constant', fill_value='')
    cat_imputer.fit(x_train)
    x_trainT = cat_imputer.transform(x_train)
    x_testT = cat_imputer.transform(x_test)
    encoder = preprocessing.OneHotEncoder(handle_unknown='ignore')
    encoder.fit(x_trainT)
    x_trainT = encoder.transform(x_trainT)
    x_testT = encoder.transform(x_testT)
    print(x_trainT.toarray())
    print(x_train)
    print()
    print(x_testT.toarray())
    print(x_test)
    print('/////////////////////////', '\n')
transform_categorical()

def transform_list():
    x_train, x_test, y_train, y_test = train_test_split(
        df[['SUBSTRING_4L', 'SUBSTRING_5L']], df['DISEASE'], test_size=0.5, random_state=3)
    cat_imputer = SimpleImputer(strategy='constant', fill_value='')
    cat_imputer.fit(x_train)
    x_trainT = cat_imputer.transform(x_train)
    x_testT = cat_imputer.transform(x_test)
    encoder = preprocessing.OneHotEncoder(handle_unknown='ignore')
    encoder.fit(x_trainT)
    x_trainT = encoder.transform(x_trainT)
    x_testT = encoder.transform(x_testT)
    print(x_trainT.toarray())
    print(x_train)
    print()
    print(x_testT.toarray())
    print(x_test)
    print('/////////////////////////', '\n')
transform_list()
You can flatten the list columns into separate columns with df.apply(lambda x: pd.Series(x)) and then use pd.get_dummies() to encode the object columns.
object_cols = ["URBAN", "NAME"]
list_cols = ["SUBSTRING_4L", "SUBSTRING_5L"]
features = df.drop("DISEASE", axis = 1)
features = features.drop(object_cols, axis = 1)
features = features.drop(list_cols, axis = 1)
for col in list_cols:
    features = pd.concat([features, pd.get_dummies(df[col].apply(lambda x: pd.Series(x)), prefix = col)], axis = 1)
for col in object_cols:
    features = pd.concat([features, pd.get_dummies(df[col], prefix = col)], axis = 1)
Which provides:
AGE SUBSTRING_4L_jack SUBSTRING_4L_just SUBSTRING_4L_phil SUBSTRING_4L_tyle SUBSTRING_4L_uste SUBSTRING_4L_yler SUBSTRING_5L_juste SUBSTRING_5L_tyler URBAN_rural URBAN_urban NAME_ann NAME_gil NAME_jack NAME_juste NAME_phil NAME_tyler
0 39.0 1 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0
1 NaN 0 1 0 0 1 0 1 0 0 0 0 0 0 1 0 0
2 21.0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0
3 13.0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
4 45.0 1 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0
5 26.0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0
6 NaN 0 0 1 0 0 0 0 0 0 1 0 0 0 0 1 0
7 48.0 0 0 0 1 0 1 0 1 0 1 0 0 0 0 0 1
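If you want to stay inside a scikit-learn workflow, MultiLabelBinarizer handles list-valued columns directly (a sketch, assuming df from the question; in a real train/test split you would fit on the training lists only):
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

# One binary column per distinct token; empty lists become all-zero rows
mlb = MultiLabelBinarizer()
encoded = pd.DataFrame(
    mlb.fit_transform(df['SUBSTRING_4L']),
    columns=[f'SUBSTRING_4L_{c}' for c in mlb.classes_],
    index=df.index,
)
df_encoded = pd.concat([df.drop(columns=['SUBSTRING_4L']), encoded], axis=1)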

Align overflowing SVG content to the right edge of the viewport so it overflows to the left

I have SVG content that overflows to the right because its x coordinates are larger than the width the viewBox sets. Is there any way to show the right-most part of the content and have it overflow to the left instead (like right-justifying text)?
Here is the full SVG I am working with. It is a plot that overflows, and I want to show the part on the right while hiding the "older" points.
The reason for doing this is that points are generated automatically, and I want the viewable area to always show the latest dots, with the image "auto-scrolling" to the right.
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 500 100" class="chart">
<style>
.chart {
background: white;
width: 500px;
height: 100px;
border-left: 1px dotted #555;
border-bottom: 1px dotted #555;
padding: 20px 20px 20px 0;
}
body {
/* background: #ccc; */
padding: 20px;
display: flex;
align-items: center;
justify-content: center;
}
body, html {
height: 100%;
}
</style>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="0, 0
10, 64"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="10, 64
20, 48"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="20, 48
30, 86"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="30, 86
40, 4"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="40, 4
50, 70"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="50, 70
60, 91"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="60, 91
70, 50"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="70, 50
80, 61"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="80, 61
90, 32"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="90, 32
100, 89"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="100, 89
110, 51"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="110, 51
120, 77"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="120, 77
130, 60"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="130, 60
140, 60"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="140, 60
150, 0"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="150, 0
160, 16"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="160, 16
170, 90"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="170, 90
180, 69"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="180, 69
190, 70"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="190, 70
200, 100"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="200, 100
210, 18"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="210, 18
220, 0"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="220, 0
230, 6"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="230, 6
240, 43"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="240, 43
250, 2"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="250, 2
260, 4"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="260, 4
270, 74"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="270, 74
280, 56"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="280, 56
290, 80"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="290, 80
300, 26"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="300, 26
310, 69"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="310, 69
320, 77"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="320, 77
330, 19"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="330, 19
340, 37"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="340, 37
350, 72"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="350, 72
360, 61"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="360, 61
370, 33"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="370, 33
380, 62"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="380, 62
390, 11"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="390, 11
400, 27"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="400, 27
410, 43"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="410, 43
420, 83"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="420, 83
430, 75"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="430, 75
440, 27"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="440, 27
450, 74"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="450, 74
460, 30"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="460, 30
470, 44"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="470, 44
480, 86"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="480, 86
490, 19"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="490, 19
500, 34"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="500, 34
510, 54"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="510, 54
520, 57"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="520, 57
530, 59"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="530, 59
540, 45"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="540, 45
550, 100"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="550, 100
560, 84"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="560, 84
570, 97"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="570, 97
580, 24"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="580, 24
590, 6"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="590, 6
600, 73"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="600, 73
610, 52"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="610, 52
620, 68"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="620, 68
630, 47"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="630, 47
640, 36"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="640, 36
650, 57"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="650, 57
660, 49"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="660, 49
670, 25"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="670, 25
680, 15"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="680, 15
690, 1"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="690, 1
700, 33"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="700, 33
710, 38"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="710, 38
720, 2"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="720, 2
730, 70"/>
<polyline
fill="none"
stroke="#0074d9"
stroke-width="2"
points="730, 70
740, 23"/>
</svg>
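One way to achieve this (a sketch, not from the original post): the viewBox's min-x sets the left edge of the visible window, so moving it to the newest x minus the window width shows the right-most part and lets older points fall off the left. The last point above is at x = 740 and the window is 500 units wide, so:
<svg xmlns="http://www.w3.org/2000/svg" viewBox="240 0 500 100" class="chart">
<!-- same styles and polylines as above; everything left of
     x = 740 - 500 = 240 now falls outside the window -->
</svg>
Each time a new point is appended, min-x just needs to be recomputed as newestX - 500.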

Illustrator SVG unequal coordinate exporting for shape tween

I am trying to create a SMIL animation tween between two shapes, and I know that the two shapes need to have the same number of coordinates. The problem is that when I export the two SVGs from Illustrator, the paths do not match up in terms of how many coordinate sets there are. Both shapes were created from the same source with no extra points added or removed, just the curve at the top straightened out.
Here is my SVG on Codepen http://codepen.io/tands/pen/LEmoOK and below...
<!-- smile circle -->
<svg version="1.1" x="0px" y="0px" viewBox="0 0 453.5 290" enable-background="new 0 0 453.5 290" xml:space="preserve">
<path fill="#000000" d="
M226.8,
34.9
C146.3,
34.9,
71.5,
22,
8.9,
0
C3.1,
20.1,
0,
41.3,
0,
63.2
C0,
188.5,
101.5,
290,
226.8,
290
S453.5,
188.5,
453.5,
63.2
c0-21.9-3.1-43.1-8.9-63.2
C382.1,
22,
307.2,
34.9,
226.8,
34.9z
H0z
"/>
</svg>
<!-- partial circle -->
<svg version="1.1" x="0px" y="0px" viewBox="0 0 453.5 290" enable-background="new 0 0 453.5 290" xml:space="preserve">
<path fill="#000000" d="
M8.9,
0
C3.1,
20.1,
0,
41.3,
0,
63.2
C3.1,
20.1,
0,
41.3,
0,
63.2
C0,
188.5,
101.5,
290,
226.8,
290
S453.5,
188.5,
453.5,
63.2
c0-21.9-3.1-43.1-8.9-63.2
C0,
0,
0,
0,
0,
0
H8.9z
"/>
</svg>
So I guess my question is: how do I achieve the shape tween between the two shapes on CodePen even though they have different coordinates?
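(A sketch of the constraint, with made-up coordinates: SMIL can only interpolate d when both values contain the identical sequence of path commands, so exports usually have to be hand-edited or redrawn until the command lists match.)
<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
<!-- both d values use the same command sequence (M, C, Z),
     so the browser can interpolate between them -->
<path fill="#000000" d="M10,50 C30,10 70,10 90,50 Z">
<animate attributeName="d" dur="1s" repeatCount="indefinite"
         values="M10,50 C30,10 70,10 90,50 Z;
                 M10,50 C30,90 70,90 90,50 Z;
                 M10,50 C30,10 70,10 90,50 Z"/>
</path>
</svg>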
