I am not able to convert timestamps in UTC to the AKST timezone in Spark 3.0. The same conversion works in Spark 2.4, and all other conversions (to EST, PST, MST, etc.) work fine.
I'd appreciate any input on how to fix this error.
The following command:
spark.sql("select from_utc_timestamp('2020-10-01 11:12:30', 'AKST')").show
returns the error:
java.time.zone.ZoneRulesException: Unknown time-zone ID: AKST
Detailed log:
java.time.zone.ZoneRulesException: Unknown time-zone ID: AKST
at java.time.zone.ZoneRulesProvider.getProvider(ZoneRulesProvider.java:272)
at java.time.zone.ZoneRulesProvider.getRules(ZoneRulesProvider.java:227)
at java.time.ZoneRegion.ofId(ZoneRegion.java:120)
at java.time.ZoneId.of(ZoneId.java:411)
at java.time.ZoneId.of(ZoneId.java:359)
at java.time.ZoneId.of(ZoneId.java:315)
at org.apache.spark.sql.catalyst.util.DateTimeUtils$.getZoneId(DateTimeUtils.scala:62)
at org.apache.spark.sql.catalyst.util.DateTimeUtils$.fromUTCTime(DateTimeUtils.scala:833)
at org.apache.spark.sql.catalyst.expressions.FromUTCTimestamp.nullSafeEval(datetimeExpressions.scala:1299)
at org.apache.spark.sql.catalyst.expressions.BinaryExpression.eval(Expression.scala:552)
at org.apache.spark.sql.catalyst.expressions.UnaryExpression.eval(Expression.scala:457)
at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$$anonfun$apply$1$$anonfun$applyOrElse$1.applyOrElse(expressions.scala:52)
at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$$anonfun$apply$1$$anonfun$applyOrElse$1.applyOrElse(expressions.scala:45)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$1(TreeNode.scala:321)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:72)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:321)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$3(TreeNode.scala:326)
at org.apache.spark.sql.catalyst.trees.TreeNode.applyFunctionIfChanged$1(TreeNode.scala:380)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:416)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:248)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:414)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:362)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:326)
at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformExpressionsDown$1(QueryPlan.scala:96)
at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$1(QueryPlan.scala:118)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:72)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:118)
at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:129)
at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$3(QueryPlan.scala:134)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike.map(TraversableLike.scala:238)
at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
at scala.collection.immutable.List.map(List.scala:298)
at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:134)
at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$4(QueryPlan.scala:139)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:248)
at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:139)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsDown(QueryPlan.scala:96)
at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$$anonfun$apply$1.applyOrElse(expressions.scala:45)
at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$$anonfun$apply$1.applyOrElse(expressions.scala:44)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$1(TreeNode.scala:321)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:72)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:321)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown(AnalysisHelper.scala:149)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown$(AnalysisHelper.scala:147)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$3(TreeNode.scala:326)
at org.apache.spark.sql.catalyst.trees.TreeNode.applyFunctionIfChanged$1(TreeNode.scala:380)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:416)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:248)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:414)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:362)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:326)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown(AnalysisHelper.scala:149)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown$(AnalysisHelper.scala:147)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$3(TreeNode.scala:326)
at org.apache.spark.sql.catalyst.trees.TreeNode.applyFunctionIfChanged$1(TreeNode.scala:380)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:416)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:248)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:414)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:362)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:326)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown(AnalysisHelper.scala:149)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDown$(AnalysisHelper.scala:147)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:310)
at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$.apply(expressions.scala:44)
at org.apache.spark.sql.catalyst.optimizer.ConstantFolding$.apply(expressions.scala:43)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:149)
at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
at scala.collection.immutable.List.foldLeft(List.scala:89)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:146)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:138)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:138)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:116)
at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:98)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:116)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$optimizedPlan$1(QueryExecution.scala:82)
at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:121)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:153)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:153)
at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:82)
at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:79)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$writePlans$4(QueryExecution.scala:217)
at org.apache.spark.sql.catalyst.plans.QueryPlan$.append(QueryPlan.scala:381)
at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$writePlans(QueryExecution.scala:217)
at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:227)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:96)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:207)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:88)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3653)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2737)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2944)
at org.apache.spark.sql.Dataset.getRows(Dataset.scala:301)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:338)
at org.apache.spark.sql.Dataset.show(Dataset.scala:864)
at org.apache.spark.sql.Dataset.show(Dataset.scala:823)
at org.apache.spark.sql.Dataset.show(Dataset.scala:832)
... 47 elided
Adding further to mck's answer: you are using the short time-zone IDs from the old Java date-time API. According to the Databricks blog post A Comprehensive Look at Dates and Timestamps in Apache Spark™ 3.0, Spark migrated to the new API in version 3.0:
Since Java 8, the JDK has exposed a new API for date-time manipulation
and time zone offset resolution, and Spark migrated to this new API in
version 3.0. Although the mapping of time zone names to offsets has
the same source, IANA TZDB, it is implemented differently in Java 8
and higher versus Java 7.
You can verify this by opening spark-shell and listing the available short IDs:
import java.time.ZoneId
import scala.collection.JavaConverters._
// The legacy three-letter IDs recognised via ZoneId.SHORT_IDS; AKST is not among them
ZoneId.SHORT_IDS.asScala.keys
//res0: Iterable[String] = Set(CTT, ART, CNT, PRT, PNT, PLT, AST, BST, CST, EST, HST, JST, IST, AGT, NST, MST, AET, BET, PST, ACT, SST, VST, CAT, ECT, EAT, IET, MIT, NET)
That said, you should not use abbreviations when specifying time zones; use the area/city format instead. See Which three-letter time zone IDs are not deprecated?
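For reference, here's a small spark-shell sketch (using only the standard java.time API) that looks up the proper region-based ID for Alaska; the filter strings are just examples:
import java.time.{Instant, ZoneId}
import scala.collection.JavaConverters._

// Region-based IDs known to the JVM (sourced from the IANA TZDB)
ZoneId.getAvailableZoneIds.asScala
  .filter(id => id.contains("Anchorage") || id.contains("Alaska"))
  .foreach(println)
// e.g. America/Anchorage, US/Alaska

// The region ID resolves to the AKST/AKDT offset depending on the instant
ZoneId.of("America/Anchorage").getRules.getOffset(Instant.parse("2020-10-01T11:12:30Z"))
// res1: java.time.ZoneOffset = -08:00  (AKDT is in effect on that date)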
It seems Spark 3 can't understand AKST, but it does understand America/Anchorage, which I suppose corresponds to AKST:
spark.sql("select from_utc_timestamp('2020-10-01 11:12:30', 'America/Anchorage')").show
I'm testing bandwidth using the fio benchmark tool.
Here is my hardware spec:
2 sockets, 10 cores per socket
Kernel version: 4.8.17
Intel SSD 750 Series
CPU: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz; SSD: Intel Solid State Drive 750 Series, 400GB, 20nm Intel NAND Flash Memory MLC, NVMe PCIe 3.0 x4 add-in card.
I invalidate the buffer/page cache for the file prior to starting I/O (invalidate=1 in the fio job file).
I also use the O_DIRECT flag (non-buffered I/O) to bypass the page cache, and Linux native asynchronous I/O (libaio).
When I test with one core, the fio output says the bandwidth core 0 received is 1516.7MB/s.
That doesn't exceed the bandwidth limit of the Intel SSD 750, so that part is fine.
Here is the test1 job file:
[global]
filename=/dev/nvme0n1
runtime=10
bs=4k
ioengine=libaio
direct=1
iodepth=64
invalidate=1
randrepeat=0
log_avg_msec=1000
time_based
thread=1
size=256m
[job1]
cpus_allowed=0
rw=randread
But when I run this with 3 cores, the total bandwidth across the cores exceeds the Intel SSD 750's bandwidth limit.
The total bandwidth of the 3 cores is about 3000MB/s.
According to the Intel SSD 750 spec, my SSD's bandwidth limit is 2200MB/s.
Here is the job file for test2 (3 cores):
[global]
filename=/dev/nvme0n1
runtime=10
bs=4k
ioengine=libaio
direct=1
iodepth=64
invalidate=1
randrepeat=0
log_avg_msec=1000
time_based
thread=1
size=256m
[job1]
cpus_allowed=0
rw=randread
[job2]
cpus_allowed=1
rw=randread
[job3]
cpus_allowed=2
rw=randread
I don't understand how this can happen.
Here is the fio output of test1:
job1: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 1 thread
job1: (groupid=0, jobs=1): err= 0: pid=6924: Mon Jan 29 20:14:33 2018
read : io=15139MB, bw=1513.8MB/s, iops=387516, runt= 10001msec
slat (usec): min=0, max=42, avg= 1.97, stdev= 1.12
clat (usec): min=5, max=1072, avg=162.70, stdev=20.17
lat (usec): min=6, max=1073, avg=164.74, stdev=20.39
clat percentiles (usec):
| 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 151],
| 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161],
| 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 179], 95.00th=[ 211],
| 99.00th=[ 229], 99.50th=[ 262], 99.90th=[ 318], 99.95th=[ 318],
| 99.99th=[ 334]
lat (usec) : 10=0.01%, 20=0.01%, 50=0.02%, 100=0.03%, 250=99.35%
lat (usec) : 500=0.60%, 1000=0.01%
lat (msec) : 2=0.01%
cpu : usr=22.32%, sys=77.64%, ctx=102, majf=0, minf=421
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=3875556/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: io=15139MB, aggrb=1513.8MB/s, minb=1513.8MB/s, maxb=1513.8MB/s, mint=10001msec, maxt=10001msec
Disk stats (read/write):
nvme0n1: ios=3834624/0, merge=0/0, ticks=25164/0, in_queue=25184, util=99.61%
Here is the fio output of test2 (3 cores):
job1: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
job2: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
job3: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 3 threads
job1: (groupid=0, jobs=1): err= 0: pid=6968: Mon Jan 29 20:14:53 2018
read : io=10212MB, bw=1021.2MB/s, iops=261413, runt= 10001msec
slat (usec): min=1, max=140, avg= 2.49, stdev= 1.23
clat (usec): min=4, max=970, avg=241.78, stdev=138.10
lat (usec): min=7, max=972, avg=244.35, stdev=138.09
clat percentiles (usec):
| 1.00th=[ 17], 5.00th=[ 25], 10.00th=[ 33], 20.00th=[ 64],
| 30.00th=[ 135], 40.00th=[ 225], 50.00th=[ 306], 60.00th=[ 330],
| 70.00th=[ 346], 80.00th=[ 366], 90.00th=[ 390], 95.00th=[ 410],
| 99.00th=[ 438], 99.50th=[ 446], 99.90th=[ 474], 99.95th=[ 502],
| 99.99th=[ 668]
lat (usec) : 10=0.01%, 20=2.03%, 50=14.39%, 100=9.67%, 250=16.14%
lat (usec) : 500=57.71%, 750=0.05%, 1000=0.01%
cpu : usr=17.32%, sys=71.84%, ctx=182182, majf=0, minf=318
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=2614396/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64
job2: (groupid=0, jobs=1): err= 0: pid=6969: Mon Jan 29 20:14:53 2018
read : io=10540MB, bw=1053.1MB/s, iops=269802, runt= 10001msec
slat (usec): min=1, max=35, avg= 1.93, stdev= 0.97
clat (usec): min=5, max=903, avg=234.55, stdev=139.14
lat (usec): min=7, max=904, avg=236.56, stdev=139.13
clat percentiles (usec):
| 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 30], 20.00th=[ 57],
| 30.00th=[ 112], 40.00th=[ 207], 50.00th=[ 298], 60.00th=[ 330],
| 70.00th=[ 346], 80.00th=[ 362], 90.00th=[ 386], 95.00th=[ 402],
| 99.00th=[ 426], 99.50th=[ 438], 99.90th=[ 462], 99.95th=[ 494],
| 99.99th=[ 628]
lat (usec) : 10=0.01%, 20=3.22%, 50=14.51%, 100=10.76%, 250=15.48%
lat (usec) : 500=55.97%, 750=0.05%, 1000=0.01%
cpu : usr=26.08%, sys=59.08%, ctx=377522, majf=0, minf=326
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=2698293/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64
job3: (groupid=0, jobs=1): err= 0: pid=6970: Mon Jan 29 20:14:53 2018
read : io=10368MB, bw=1036.8MB/s, iops=265406, runt= 10001msec
slat (usec): min=1, max=102, avg= 2.48, stdev= 1.24
clat (usec): min=5, max=874, avg=238.10, stdev=139.10
lat (usec): min=7, max=877, avg=240.66, stdev=139.09
clat percentiles (usec):
| 1.00th=[ 18], 5.00th=[ 27], 10.00th=[ 39], 20.00th=[ 72],
| 30.00th=[ 113], 40.00th=[ 193], 50.00th=[ 290], 60.00th=[ 330],
| 70.00th=[ 350], 80.00th=[ 370], 90.00th=[ 398], 95.00th=[ 414],
| 99.00th=[ 442], 99.50th=[ 454], 99.90th=[ 474], 99.95th=[ 498],
| 99.99th=[ 628]
lat (usec) : 10=0.01%, 20=1.51%, 50=12.00%, 100=13.78%, 250=17.81%
lat (usec) : 500=54.84%, 750=0.05%, 1000=0.01%
cpu : usr=17.96%, sys=71.88%, ctx=170809, majf=0, minf=319
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued : total=r=2654335/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: io=31121MB, aggrb=3111.9MB/s, minb=1021.2MB/s, maxb=1053.1MB/s, mint=10001msec, maxt=10001msec
Disk stats (read/write):
nvme0n1: ios=7883218/0, merge=0/0, ticks=1730536/0, in_queue=1763060, util=99.52%
Hmm...
@peter-cordes makes a good point about the (device) cache. A Google search turns up https://www.techspot.com/review/984-intel-ssd-750-series/ , which says the following:
Also onboard are five Micron D9PQL DRAM chips which are used as a 1.25GB cache and the specs say this is DDR3-1600 memory.
Given that you're restricting fio to the same 256MByte region for all threads, it could well be that all your I/O easily fits into the device's cache. There's no dedicated way of discarding a device's cache (as opposed to Linux's buffer cache) other than natural means, so I'd recommend making your working region dramatically bigger (e.g. tens to hundreds of gigabytes) to reduce the odds of one thread's data being prefetched by another thread's accesses; see the sketch below.
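For example, a variation of your 3-core job file that spreads the jobs over large, non-overlapping regions of the device might look like this (just a sketch; adjust the sizes/offsets to what your 400GB drive can hold):
[global]
filename=/dev/nvme0n1
runtime=10
bs=4k
ioengine=libaio
direct=1
iodepth=64
invalidate=1
randrepeat=0
log_avg_msec=1000
time_based
thread=1
; 100g per job: far larger than the SSD's 1.25GB DRAM cache
size=100g

[job1]
cpus_allowed=0
rw=randread
offset=0

[job2]
cpus_allowed=1
rw=randread
offset=100g

[job3]
cpus_allowed=2
rw=randread
offset=200g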
Additionally, I would ask "what data did you put down on to the SSD before you read it back?" SSDs are typically "thin" in the sense that they can be aware of regions that have never been written, or that they have been told were explicitly discarded. Because of this, reading from such regions means the SSD has little work to do and can return data extremely quickly (like what an OS does when you read from a hole in a sparse file). In "real life" it is rare that you choose to read something you've never written, so doing so will distort your results; precondition the region with writes before measuring reads.
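As a rough illustration, a one-off preconditioning job that writes the region you later read back could look like the following (a sketch only; it overwrites data on /dev/nvme0n1, so only run it against a scratch device, and expect it to take a while):
[precondition]
filename=/dev/nvme0n1
rw=write
bs=128k
ioengine=libaio
direct=1
iodepth=32
; cover at least the region the read jobs will touch
size=300g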