
  • have bandwidth that is some % of carrier frequency,

    In my limited ham radio experience, I’ve not seen any antennas or amplifiers which specify their bandwidth as a percentage of “carrier frequency”, and I think that term doesn’t make sense for antennas and (analog) amplifiers, since the carrier is a property of the modulation. An antenna doesn’t care about modulation, which is why “HDTV antennas” circa the 2000s in the USA were merely a marketing term.

    The only antennas and amplifiers I’ve seen have given their bandwidth as fixed ranges, often accompanied with a plot of the varying gain/output across that range.

    going up in frequency makes bandwidth bigger

    Yes, but also no. If a 200 kHz FM commercial radio station’s signal were shifted from its customary 88-108 MHz band up to the terahertz range of the electromagnetic spectrum (where infrared and visible light are), the bandwidth would still remain 200 kHz. Indeed, this shifting is actually done, albeit for cable television, where those signals are modulated onto fibre optic cables.

    What is definitely true is that way up in the electromagnetic spectrum, there are simply more hertz to utilize. If we include all radio/microwave bands, that would be the approximate frequencies from 30 kHz to 300 GHz. So basically 300 GHz of bandwidth. But for C-band fibre optic cable, the usable band is from 1530-1565 nm, which translates to roughly 192-196 THz, with about 4 THz of bandwidth. That’s over thirteen times larger! So much room for activities!
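
    As a quick sanity check of those figures, here’s a minimal sketch (the only physics assumed is f = c/λ for converting the fibre window’s wavelengths to frequencies):

    ```python
    C = 299_792_458  # speed of light, m/s

    # C-band fibre window, given as wavelengths above
    f_high = C / 1530e-9           # ~196 THz
    f_low = C / 1565e-9            # ~192 THz
    fibre_bw = f_high - f_low      # ~4.4 THz

    radio_bw = 300e9 - 30e3        # all radio/microwave bands, ~300 GHz
    print(f"fibre C-band: {fibre_bw / 1e12:.1f} THz, "
          f"{fibre_bw / radio_bw:.1f}x the entire radio spectrum")
    ```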

    For less industrial use-cases, we can look to 60 GHz technology, which is used for so-called “Wireless HDMI” devices, because the 7 GHz bandwidth of the 60 GHz band enables huge data rates.

    To actually compare the modulation of different technologies irrespective of their radio band, we often look to spectral efficiency, which is how much data (bits/sec) can be sent over a given bandwidth (in Hz). Higher bits/sec/Hz means more efficient use of the radio waves, up to the Shannon-Hartley theoretical limit.
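
    To make that limit concrete, here’s a minimal sketch (the 20 MHz channel and 20 dB SNR are illustrative assumptions, not figures from any particular device):

    ```python
    import math

    def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
        """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
        return bandwidth_hz * math.log2(1 + snr_linear)

    snr = 10 ** (20 / 10)                  # 20 dB -> linear ratio of 100
    cap = shannon_capacity_bps(20e6, snr)  # a 20 MHz channel
    print(f"~{cap / 1e6:.0f} Mbit/s ceiling, "
          f"{cap / 20e6:.1f} bits/sec/Hz spectral efficiency")
    ```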

    getting higher % of bandwidth requires more sophisticated, more expensive, heavier designs

    Again, yes but also no. If a receiver need only receive a narrow band, then the most straightforward design is to shift the operating frequency down to something more manageable. This is the basis of superheterodyne FM radio receivers, dating from an era when a few MHz were considered very fast waves.

    We can and do have examples of this design for higher microwave frequency operation, such as shifting broadcast satellite signals down to normal television bands, suitable for reusing conventional TV coax, which can only carry signals in the 0-2 GHz band at best.
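
    For intuition, here’s a toy sketch of that downconversion (the frequencies are scaled-down stand-ins so the script stays small, not real broadcast values):

    ```python
    import numpy as np

    # Multiplying an incoming "RF" tone by a local oscillator (LO) yields
    # tones at the sum and difference frequencies; a filter then keeps the
    # lower, more manageable "intermediate frequency" (IF).
    fs = 1_000_000                        # sample rate, Hz
    t = np.arange(0, 0.01, 1 / fs)        # 10 ms of samples
    rf = np.cos(2 * np.pi * 100_000 * t)  # incoming tone at 100 kHz
    lo = np.cos(2 * np.pi * 90_000 * t)   # local oscillator at 90 kHz
    mixed = rf * lo                       # products at 10 kHz and 190 kHz

    spectrum = np.abs(np.fft.rfft(mixed))
    freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
    peaks = freqs[spectrum > spectrum.max() / 2]
    print(f"mixer products near {peaks.min():.0f} Hz and {peaks.max():.0f} Hz")
    ```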

    The real challenge is when a massive chunk of bandwidth is of interest; then careful analog design is required. Well, maybe only for precision work. Software-defined radio (SDR) is one realm that needs the analog firehose, since “tuning” into a specific band or transmission is done later in software. A cheap RTL-SDR can view a 2.4 MHz slice of bandwidth, which is suitable for plenty of things except broadcast TV, which needs 5-6 MHz.

    LoRa is much slower, caused by narrowed bandwidth but also because it’s more noise-resistant

    I feel like this states the cause and effect in the wrong order. The designers of LoRa knew they wanted a narrow-band, low-symbol-rate air interface in order to be long range, and thus were prepared to trade away faster throughput to achieve that objective. I won’t say that slowness is a “feature” of LoRa, but given the same objectives and the limitations that this universe imposes, no one has produced a competitor with a blisteringly fast data rate. So slowness is simply expected under these circumstances; it’s not a “bug” that can be fixed.

    In the final edit of my original comment, I added this:

    Radio engineering, like all other disciplines of engineering, centers upon balancing competing requirements and limitations in elegant ways. Radio range is the product of intensely optimizing all factors for the desired objective.


  • Also, what if things that require very little data transmission used something lower than 2.4Ghz for longer range? (1Ghz or something?)

    No one seemed to touch upon this part, so I’ll chime in. The range and throughput of a transmission depend on a lot of factors, but the most prominent, in no particular order, are: peak and average output power, modulation (the pattern of radio waves sent) and frequency, background noise, and bandwidth (in Hz; how much spectrum width the transmission will occupy).

    If all else were equal, changing the frequency to a lower band wouldn’t impact range or throughput. But that’s hardly ever the case, since reducing the frequency imposes limitations on the usable modulations, which means trying to send the same payload either takes longer or uses more spectral bandwidth. Those two approaches have the side-effect that slower transmissions are more easily recovered from farther away, and using more bandwidth means partial interference from noise has a lesser impact, as well as a lower risk of interception. So in practice, a lower frequency could improve range, but the other factors would have to take up the slack to keep the same throughput.
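
    The frequency knob shows up directly in a link budget. Here’s a minimal sketch using the standard free-space path loss formula (it assumes idealized isotropic antennas; the frequency dependence really reflects how a fixed-gain antenna’s capture area shrinks as frequency rises):

    ```python
    import math

    def fspl_db(distance_km: float, freq_mhz: float) -> float:
        """Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44 dB."""
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

    for f_mhz in (915, 2400):  # the two ISM bands discussed here
        print(f"{f_mhz} MHz over 5 km: {fspl_db(5, f_mhz):.1f} dB")
    # 2.4 GHz loses ~8 dB more than 915 MHz over the same path
    ```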

    Indeed, actual radio systems manipulate some or all of those factors when longer distance reception is the goal. Some systems are clever with their modulation, such as FT8 used by amateur radio operators, in order to use low-power transmitters in noisy radio bands. On the flip side, sometimes raw power can overcome all obstacles. Or maybe just send very infrequent, impeccably narrow messages, using an atomic clock for frequency accuracy.

    To answer the question concretely though: there are LoRa devices which prefer to use the ISM band centered on 915 MHz in the Americas, as the objective is indeed long range (up to a few hundred km in ideal line-of-sight conditions) and a small payload (maybe <100 bytes), which means the comparatively wider (and noisier) 2.4 GHz band is unneeded and unwanted. But this is just one example, and LoRa has many implementations that change the base parameters. Like how MeshCore and Meshtastic might use the same physical radios, but the former implements actual mesh routing while the latter floods to all nodes (a bad thing).
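
    To put numbers on that range-for-speed trade, here’s a minimal sketch of LoRa’s nominal bit rate, Rb = SF × (BW / 2^SF) × CR (the 125 kHz channel and 4/5 coding rate are common defaults, assumed here):

    ```python
    def lora_bitrate_bps(sf: int, bw_hz: float, cr: float = 4 / 5) -> float:
        """Nominal LoRa bit rate for a given spreading factor (SF)."""
        return sf * (bw_hz / 2 ** sf) * cr

    for sf in range(7, 13):  # SF7 (fastest) through SF12 (longest range)
        print(f"SF{sf}: {lora_bitrate_bps(sf, 125_000):,.0f} bit/s")
    # ~5,469 bit/s at SF7 down to ~293 bit/s at SF12
    ```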

    But some systems like WiFi or GSM can be tuned for longer range while still using their customary frequencies, by turning those other aforementioned knobs. Custom networks could indeed be dedicated to only sending very small amounts of data, like for telemetry (see SCADA). That said, GSM does have a hard cap of 35 km, because the timing advance mechanism it uses to juggle multiple devices at once can only compensate for so much round-trip delay.

    Radio engineering, like all other disciplines of engineering, centers upon balancing competing requirements and limitations in elegant ways. Radio range is the product of intensely optimizing all factors for the desired objective.




  • Let’s say you have a household of 5 people with 20 devices in the LAN, one can be infected and running some bot, you do not want to block 5 people and 20 devices.

    Why not, though? If a home network is misbehaving, whoever is maintaining that network needs to: 1) be aware that something is wrong, and 2) fix it on their end. Most homes don’t have a Network Operations Center to contact, but throwing an error code in a web browser is often effective, since someone in the household will notice. Unlike institutional users, home devices are not totally SOL when blocked, as they can be moved to cellular networks or other WiFi networks.

    At the root of the problem, NAT deprives the users behind it of agency: they’re all in the same barrel, and the maxim about bad apples will apply. You’re right that it gets even worse for CGNAT, but that’s more a reason to refuse all types of NAT and prefer end-to-end IPv6.


  • exclusively developed brakes

    This worries me somewhat. On one hand, when the objective is a sub-10 kg ebike, all options should be available, even if that means having to custom-build components, and brakes would be no exception. On the other hand, some parts on bikes, brakes included, have already been refined by existing engineering efforts for racing bikes, although racing rules eventually removed the incentive for ever-lighter bikes once they started to become structurally unsound.

    Even still, it’s hard for me to imagine how much mass can be shaved off of brake handles, calipers, and discs, such that the savings are worthwhile. The flowery language on their website does not adequately explain exactly what their brake design improves upon, other than being “engineered in Switzerland” and promising “exceptional reliability and performance”.

    Still, 10 kg is quite the achievement unto itself. Though the €12k price tag means this won’t be an everyday city bike for the masses.




  • I’m of the opinion that gloves are a solution for avoiding or limiting calluses, but with the asterisks that: 1) they can also be a (minor) crutch if you’re not also developing your grip strength in tandem, and 2) gloves must be properly fitting or else they might be even worse than no gloves at all.

    If the calluses seem to relate only to isolation exercises, then #1 is less of an issue. But if they appear due to compound exercises, then that’s a clue to improve your grip, as calluses ostensibly arise from uneven pressure across your hands.

    For #2, ill-fitting gloves might grip the implement, barbell, or dumbbell just fine, but if the glove slides against your palm, it’ll bunch up, and that virtually guarantees uneven pressure. You might also consider fingerless gloves, so that there’s material only where it’s needed to avoid palm calluses.


  • I happened to have a friend who is into lifting, and my work also has an on-site gym with a dedicated fitness instructor. But had those circumstances not been available, I would have considered hiring a personal trainer for a few sessions. Another option is to see what your local city might offer in terms of continuing education, since that often includes time with a fitness expert.

    That said, while it’s true that a video won’t properly describe the full motion for an exercise, watching one should still help you avoid the major pitfalls that might cause injury. If there’s a risk that you’ll injure yourself, it might be because you’re trying too much weight too early.

    If needed, you can even try an exercise with little or no weight, to make sure your form is dialed in. Also, do not underestimate the utility of setting up a camera on a tripod to record your form. It’s fairly easy to do, and other people can quickly review a video and give tips, even online.





  • @jayemar already gave a valid counterpoint, about how to select the technocrats in the first place. But let’s suppose we did somehow select the best and brightest of their fields. The next problem is that life is messy, and there often isn’t a single answer or criterion which determines what is in the public interest.

    Btw, for everyone’s benefit, J-PAL is the Abdul Latif Jameel Poverty Action Lab at MIT, with branches covering different parts of the world, since policies on addressing poverty necessarily differ depending on local circumstances. They might be described as a research institute or maybe a think tank, as they advocate for more-effective solutions to poverty and give advice on how to do that.

    Poverty reduction, as an objective, can be roughly distilled into bringing everyone above some numerical economic figure. There may be different methods that bring people out of poverty, but it’s fairly straightforward to assess the effectiveness of those solutions, by seeing how many people exit poverty and how much each solution costs.

    Now take something like – to stay with economics – management of the central bank. The USA’s central bank (the Federal Reserve) operates under a dual mandate, which means it manages the currency with care to: 1) not let inflation run amok, and 2) keep USA unemployment low. The dual mandate is tricky because the remedy for one tends to aggravate the other. So when both strike at once (stagflation), what should a technocrat do? Sacrifice one goal short-term to achieve the other long-term? Try attacking both but perhaps fail at either?

    Such choices are not straight yes/no or go/no-go questions, but are rightfully questions of policy and judgement. Is it fine to sell 10% of parkland for resource extraction if doing so iron-clad guarantees the remaining 90% is protected as wilderness in perpetuity? How about 25%? 60%?

    Subject matter experts (SMEs) are excellent at their craft, but asking them to write public policy – even with help from other SMEs – won’t address the fuzzy dilemmas that absolutely arise in governance.

    In a democratic republic, voters not only choose the politician whose views they agree with, but also subscribe to that politician’s sense of judgement for all of life’s unknowns. Sometimes this goes well; sometimes that trust is misplaced. Although it’s imperfect, this system can answer the fuzzy dilemmas which technocracies cannot.


  • Irrespective of any subsequent arrests made, publicizing evidence of actual criminal activity is generally a social good, which often doesn’t (but can) overlap with vigilantism. Taking the term broadly, vigilantism is doing something that the law can’t/won’t do. Wikipedia discusses the various definitions, some of which require the use of force (something conventionally reserved to the law or government) but the broadest definition would technically include whistleblowing and community activism. On the flip side, certain forms of publicizing evidence are illegal, such as leaking designated national secrets.

    From a law perspective, in the USA, apart from that rather narrow exception and a few others, the First Amendment’s guarantee of free speech provides the legal cover to reveal genuine evidence of someone’s criminal conduct, because: 1) criminal matters are in the public interest to expose, 2) an assailant cannot assert a privacy interest in the evidence of their crime, and 3) the truth cannot be penalized by defamation claims. That basically covers the applicable USA free-speech exceptions, although someone accused could still file a frivolous lawsuit to financially harass the one who exposed the evidence. Such frivolous lawsuits are functionally banned only in the handful of states with anti-SLAPP laws, which is why more states and the feds need to adopt anti-SLAPP protections.

    So from a legal perspective, leaking evidence of a crime is generally allowed. From a moral perspective, most would agree that it’s a good thing, and it’s why we have things like public trials, to showcase evidence. But does exposing crimes on one’s own constitute vigilantism? I would say no, but others with a different definition might say yes, even if they also agree that it’s legally and morally correct.


  • In a slight departure from the norm, the article’s title suits the article but the subheader is superfluous and unsupported. What on earth does pursuing advanced degrees have to do with railroad antitrust laws? The only color that this blurb adds to the article is the ugliest sort of “yellow”.

    The subheader’s premise is wholly betrayed by the article’s final conclusion:

    In doing so, the Court cited a Rule of Reason it first articulated in 1899—that large size and monopoly in themselves are not necessarily evil.

    So yes, certain trust-like behavior can be worthy of “regulatory and judicial punishment”, because that’s exactly what the public policy demands. Does it depend on a lot of things? Of course! Most things do!

    I bemoan articles that lean into an assumption that something is cut-and-dried, because that’s almost never the case, but here, whichever editor wrote that subheader did the author dirty. The article body is mostly fine, merely let down by bad editing.



  • You and friend 1 have working setups. Friend 2 can’t seem to get their setup to work. So the problem has to be specific to friend 2’s machine or network.

    To start at the very basics: when WG is disabled, what are friend 2’s DNS servers, as listed in “/etc/resolv.conf” (Linux) or in the output of “ipconfig /all” (Windows)? These can be IPv4 or IPv6 addresses. Whatever they are, take note of them. Also try to ping them and make sure the ping is successful.

    Then have friend 2 enable WG. Now try pinging the same DNS servers again. If this fails, you are one step closer to the problem. If this succeeds, then check to see if WG caused new DNS servers to replace the former ones.

    One possibility is that friend 2’s home network also uses 192.168.8.X, and so the machine tries to reach the DNS servers by going through WG. But we need more details before making this conclusion.
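
    If you want to test that overlap hypothesis quickly, Python’s stdlib can do it (both prefixes below are placeholders; substitute friend 2’s actual LAN subnet and the tunnel’s AllowedIPs):

    ```python
    import ipaddress

    home_lan = ipaddress.ip_network("192.168.8.0/24")    # placeholder
    wg_allowed = ipaddress.ip_network("192.168.8.0/24")  # placeholder
    print("overlap!" if home_lan.overlaps(wg_allowed) else "no overlap")
    ```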

    You also said friend 2 can ping 9.9.9.9 (aka Quad9), but is this friend using Quad9 as their DNS server? If so, what exactly is observed when you say that “DNS doesn’t resolve”? Is this an error in a browser or the result from running “nslookup” in the command line?
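
    To take the browser out of the equation entirely, a bare-bones resolution probe like this (stdlib only; “example.com” is just a placeholder target) can be run once with WG down and once with WG up:

    ```python
    import socket

    # Ask the system resolver for a hostname and report what happens.
    try:
        infos = socket.getaddrinfo("example.com", 443)
        print("resolved:", sorted({info[4][0] for info in infos}))
    except socket.gaierror as exc:
        print("DNS resolution failed:", exc)
    ```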

    IPv6 isn’t likely to be directly responsible for DNS resolution failures, but a misconfigured WG tunnel that causes an IPv6 DNS server to be blackholed is one way to create a resolution failure. It may also just be a red herring, and the issue is contained entirely to IPv4. I would not recommend turning off IPv6, because that’s almost always the wrong answer and sweeps the other problems under the rug.



  • The only reference to a manual I could find for that ebike is the “New York Ebike Battery Operating Manual”, which seems to be Velotric’s answer to New York State’s S7503B bill from 2024, mandating that ebike sellers provide a manual.

    The document has this to say:

    1. The charger is designed for indoor use only. Always charge the battery in a well-ventilated room with a smoke detector, and the ideal charging temperature for the battery is between 50°F and 77°F (10°C to 25°C).
    2. The lithium battery’s limit charging temperature range is between 41°F and 86°F (5°C to 30°C). When the internal battery temperature exceeds this range, charging will automatically stop and enter protection mode.

    It seems that this 10-25 C recommended limit applies only when charging the battery, not to general storage. For storage, this manual is silent as to what happens at different temperatures, but you must remember that this is a bicycle: a mode of transportation which can and will find itself exposed to the elements, come rain or shine.

    As a general rule for li-ion batteries, the ideal storage temperature is somewhere between -20 and 25 C, to minimize self-discharge. Warmer storage will mean faster aging of the battery cells, but it would not imply some sort of sudden and catastrophic future failure. All batteries are mortal.

    These charging and storage temperatures are further still distinct from the operating temperatures, meaning the range in which power can be drawn from the battery.

    As for the mechanical components of the bike, those will tolerate the same wide storage temperature range as a non-electric bicycle. After all, it’s mostly metal with some rubber and plastics. What will happen, though, with exposure to sudden temperature changes – i.e. bringing a bicycle from a cold, 5 C cellar to the outdoors when it’s 45 C – is that some plastics and lubricants will fail sooner. This is not specific to ebikes at all.

    For your case, this uninsulated shed will not be ideal, and this particular ebike model does not have a quick-remove battery. If it did, my advice would be to bring the battery indoors to charge and store it, since 50 C is inappropriate for charging. But barring that, can this shed receive improved airflow? Or perhaps the bike can be chained to the shaded wall outside of the shed?