On Oct 23, 2015, at 11:23 AM, Devin Akin [mailto:Devin.Akin@DivDyn.net] wrote:
You need to practice being more thorough. 😀 hehe.
Nice work on this! I agree that this test supports Chuck’s assertion.
Thanks a ton for the effort!
On Oct 24, 2015, at 8:12 AM, Chuck Lukaszewski [mailto:firstname.lastname@example.org] wrote:
Had a really long week here, will reply more tomorrow.
Very nice test design and results. Appreciate you checking out my claim here.
On Nov 16, 2015, at 4:08 AM, Peter Mackenzie
Sorry for how long it has taken me to share my test results with you, I have been crazy busy here.
Please find attached a very quick write-up of my first controlled test. Although I have written a short conclusion section, I'm trying not to draw too many conclusions at this stage and would like to do some more tests.
On Nov 16, 2015, at 1:29 PM, Devin Akin [mailto:Devin.Akin@DivDyn.net] wrote:
I think the procedure was sound, but it looks to me like your APs were on different channels, per your screenshot. If so, that would explain the results. Please take a look and see if that was the case.
On Nov 19, 2015, at 6:25 AM, Peter Mackenzie [mailto:email@example.com] wrote:
So it would appear that one of the APs did change channel at some point during my testing. I apologize for the "schoolboy" error; I really should have noticed this. I'm normally so obsessed with detail, so I'm not sure how I missed it. As I have no proof of when the channel change occurred, I'm disregarding all my previous results and have re-run the tests using the same procedure, this time making sure that the AP stays on the same channel.
So please find attached my latest results.
On Nov 21, 2015, at 12:06 PM, Rick Murphy [mailto:firstname.lastname@example.org] wrote:
Thank you for sharing your research. Very interesting results. I wonder if the fact that you were using 40 MHz wide channels during the test would have had any effect on these results?
On Nov 23, 2015, at 8:52 AM, Peter Mackenzie [mailto:email@example.com] wrote:
I agree. It would be worth trying a 20MHz test as a comparison.
The next set of testing should include:
20MHz downlink test
20MHz uplink test
40MHz downlink test
40MHz uplink test
Protocol captures to be taken at both locations with client-matched adaptors for all tests.
Please let me know if anybody thinks anything else should be tested.
On Nov 23, 2015, at 6:19 PM, Chuck Lukaszewski [mailto:firstname.lastname@example.org] wrote:
Great to see all the work on this. Peter, love your attention to detail. Keep it coming!
I have run many, many of these spatial-reuse type tests. Both of Peter's test runs clearly suggest that his cells are too isolated from one another. If the cells truly overlapped, the combined goodput would be only marginally higher than a single cell's. Instead, he's getting almost full reuse in both the V1 and V2 test reports.
Rick’s results are more consistent with what we see here when the cells are properly separated (e.g. just inside one another’s preamble ranges). BTW – a quick way to test this is to see whether STA2 can pass traffic to AP1, and vice versa. If the cells are truly overlapping then this will be possible at MCS0. If you can’t pass traffic then the cells are independent collision domains.
Please note that there are some subtle but important differences between Peter and Rick’s test designs at the radio level.
– Peter’s choice of channel 44 significantly limits EIRP; it might be better to test on Ch 108 or 108+. I suggest not using a channel on the band edge, as for most vendors these have slightly reduced EIRP compared with inside channels due to spurious emissions issues.
– By comparison, Rick’s test uses 2.4GHz and HT20. Since his company is US-based, I assume this is with the full 36dBm allowed EIRP (or whatever the tested devices are capable of), as opposed to Peter using HT40 with the ETSI limit of 23dBm.
– Now you could say this doesn’t matter, since it’s the RX power on the far side that matters for this test. I nominally agree. I think the key difference is HT20 vs. HT40 in these two tests, but it is possible that reduced launch power is resulting in different channel fading in the two tests.
Also, I note that there is some difference between Peter’s two test runs.
– V2 shows -50 and -61 respectively inside each cell, whereas V1 shows -56 and -50. So V2 has an 11dB AP-STA SNR delta, compared with V1's 6dB delta. This likely explains the reduced goodput in the AP2 cell.
Some general observations on questions raised over the last few days.
• 40 vs 20.
o Yes this will make a difference for a test of this type due to the discussion we are having on the other thread about reduced SNRs.
o Peter’s configuration is right at the limit of coverage for HT40 – here is the table from 7131 data sheet (attached). It’s very possible that the total goodput being measured is additive because the cells are outside one another’s collision radius.
o HT20 improves sensitivity to -93. Suggest if anyone is going to put more time into this to rerun with 20, or alternatively close up the physical distance to improve SINRs by 3-4dB for good measure.
o Alternatively, move to channel 100 and go to max EIRP. Might buy you another 6dB.
• Up vs. down
o Definitely a different test.
o This one will be tricky because the 4965 probably has lower EIRP than the AP.
o Best way to test this is fire up a soft AP on the two laptops and check RSSI to see if there are any big RX power deltas.
o 4965 RX sensitivity may also be worse than the 7131. I’m not aware that the Intel specs are public, so not sure what these values are.
• Separate iperf servers
o I think this is a NOOP at the throughput levels we are talking about here (11n 2SS)
o However, given that Peter is using distance and structural loss to achieve the signal levels, it’s probably simpler from a cabling perspective to have 2 servers.
o We generally used to use a single IxChariot server for this type of test, but my lab building has common cabling, so that’s easy.
o I will say that for 11ac 3SS VHT80 testing we have gone to separate IxChariot wired endpoints since a single cell can generate 850Mbps+ TCP/UDP goodput.
It is expected that data frames will not be decodable on the far side, as their payloads require high SNR. Only the preambles will make it across.
Something else I didn’t talk about in person but is relevant: with 11n HT, a lot of vendors do not do RTS/CTS. I’m not sure about these specific products. So depending on the exact timing of when BSS1 sends the preamble for a data frame, BSS2 could miss it if it is already TXing. It’s quite possible to see a slight increase in the 2-BSS test as compared with the 1-BSS test for this reason, especially when operating at low SINR levels, which then allow each BSS to get through.
If you enable RTS/CTS with 11n, or if you test with 11ac, you will get a much clearer result. The RTS at 6Mbps rate will fully clear both BSS at both L1 (L-SIG) and L2 (NAV) levels.
On Nov 24, 2015, at 5:44 AM, Peter Mackenzie [mailto:email@example.com] wrote:
Thank you for your email and the level of detail you have put into it, really helpful.
I just want to clarify one of your points below:
BTW – a quick way to test this is to see whether STA2 can pass traffic to AP1, and vice versa. If the cells are truly overlapping then this will be possible at MCS0. If you can’t pass traffic then the cells are independent collision domains
I agree that if you can pass data then the cells are truly overlapping and the combined goodput should be marginally higher than a single cell's. This would be consistent with other testing/lab exercises I have performed. But is this what we are testing here?
Maybe I misunderstood what you were saying at the Wi-Fi Trek conference. But if I understood you correctly, the theory goes that if a STA can hear a neighbouring transmission well enough to decode a valid PLCP header, it will set CCA busy, and it is irrelevant whether or not the STA can successfully transmit a frame back to the neighbouring device.
On Nov 24, 2015, at 12:59 PM, Chuck Lukaszewski
We are testing whether 2 BSS that are < -82dBm relative to each other block one another’s transmissions. E.g. with mutual RSSI in the -82 to -93 range and SNR > 4dB.
So blockage at any RSSI value below -82 proves my assertion. You may be trying to push it too low (so -89dBm RX sensitivity is not helpful with a -93dBm noise floor; in that case we’d need an extra 2-3dB of link margin to ensure the preambles and control frames are being decoded on the other side).
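To make the margin arithmetic concrete, here is a minimal sketch of the window being described, using only the numbers quoted in this thread (the -82 to -93dBm preamble-detect window, a ~4dB SNR floor for MCS0, and the extra 2-3dB of link margin); the function name is my own:

```python
# Link-margin check for far-side preamble decoding, using figures from
# this thread. All constants are the thread's assumed numbers, not specs.

NOISE_FLOOR_DBM = -93    # assumed noise floor quoted above
MIN_SNR_DB = 4           # marginal SNR for MCS0 decode
EXTRA_MARGIN_DB = 3      # suggested 2-3dB of headroom (upper end used)

def far_side_decodable(rssi_dbm: float) -> bool:
    """True if a frame at rssi_dbm is below CCA-busy threshold (-82)
    yet still carries enough SNR margin to decode on the far side."""
    snr_db = rssi_dbm - NOISE_FLOOR_DBM
    return rssi_dbm < -82 and snr_db >= MIN_SNR_DB + EXTRA_MARGIN_DB

print(far_side_decodable(-85))  # 8dB SNR: inside the useful window
print(far_side_decodable(-89))  # only 4dB SNR: too marginal
```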
To verify the cells are within earshot of one another, check that STA2 at -85dBm relative to AP1 can associate and pass traffic at MCS0. Same with STA1 at -85dBm relative to AP2.
If you cannot do this, then by definition the two BSS are completely independent and when you run both you will get 2X goodput.
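The interpretation rule in the last two paragraphs can be sketched as a quick comparison of combined two-BSS goodput against a single-cell baseline. Note the 1.8X/1.2X cutoffs below are my own illustrative thresholds, not from this thread or any standard:

```python
# Sketch of the goodput-ratio interpretation: ~2X combined goodput means
# independent collision domains; barely above 1X means a shared medium.
# Threshold values are illustrative assumptions.

def classify_cells(single_bss_mbps: float, combined_mbps: float) -> str:
    ratio = combined_mbps / single_bss_mbps
    if ratio >= 1.8:
        return "independent collision domains (~2X goodput)"
    if ratio <= 1.2:
        return "overlapping cells sharing one collision domain"
    return "partial overlap / inconclusive"

print(classify_cells(100, 195))  # near-2X: cells likely independent
print(classify_cells(100, 110))  # marginal gain: cells likely overlap
```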
On Nov 25, 2015, at 5:31 AM, Devin Akin
Thanks to all who are actively digging into this thread.
1. Is there a reference doc for the 4dB SNR?
2. Was there a reasonable rationale behind 11n clients/APs not using RTS/CTS?…especially since 11ac clients/APs supposedly use it?
The rest was very clear.
On Nov 25, 2015, at 6:47 AM, Rick Murphy
I was also wondering if there was anything documented about the 4 dB SNR level…
On Nov 29, 2015, at 5:23 PM, Chuck Lukaszewski
Been trying to locate something, finally found it staring me in the face.
Figure 5.8 from the Perahia book that Devin sent around captures this nicely. Look at the line with the “*” for 20MHz MCS0.
At 4 dB SNR, MCS0 will produce a packet error 15% of the time. To drop the PER to ~1%, which is the usual target for a given modulation, you have to be at 7dB.
On Nov 30, 2015, at 10:20 PM, Devin Akin
Were those 1% and 15% numbers estimations based on that chart? I looked at the line you mentioned, but didn’t see that as 1% and 15%…. help?
On Dec 1, 2015, at 12:02 AM, Chuck Lukaszewski
Those are the conversions of the PER. 10^-2 = .01 = 1%
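For anyone following along, the conversion is just reading the chart's logarithmic y-axis; a trivial sketch, assuming nothing beyond base-10 arithmetic:

```python
import math

# PER charts of this kind plot packet error rate on a log10 y-axis.
def per_axis_to_percent(log10_per: float) -> float:
    """Convert a log-scale y-axis reading (e.g. -2) to a percentage."""
    return (10 ** log10_per) * 100

print(per_axis_to_percent(-2))     # 10^-2 = 0.01, i.e. 1% PER
print(round(math.log10(0.15), 2))  # 15% PER sits near 10^-0.82 on the axis
```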