Isothermal piston compressor using integral cooling channels

Pochari Technologies has devised a novel form of isothermal piston compressor.
A dense pattern of relatively thin-walled cooling tubes is fastened to and extends from the compressor cylinder head; each tube features an internal passageway for a high-heat-capacity coolant. The pressure of the liquid medium is set close to the compressor's operating pressure to minimize the required wall thickness of the cooling tubes. The compressor's piston features internal bores to accommodate these cooling tubes, with a small clearance gap left so as to prevent any friction between the piston's female bores and the tubes. As the piston reaches the top of the cylinder assembly, the gas is squeezed tightly between the female bores and the male cooling tubes, allowing extremely rapid heat transfer into the cooling medium.
Even with the high density of cooling tubes, there is sufficient space on the cylinder head for gas exit: since the density of the compressed gas is so much greater than during the inlet stroke, the exit valves can be quite small. During the intake stroke, a wall valve similar to a two-stroke engine's port can be used, or a long residence time can be allowed. A series of small valves is placed at the top of the cylinder assembly between the extending cooling tubes. In the piston assembly, it would also be possible to accommodate small cooling channels in the space between the female bores. To minimize the wall thickness of the metal, it is desirable to keep the pressure of the coolant as high as possible. Higher pressure also raises the boiling point of the liquid cooling medium: water remains liquid up to its critical point of 374 degrees Celsius at 221 bar, which forms the working principle of the famous pressurized water reactor.
With this isothermal compression concept, it would be possible to go from atmospheric pressure to an ammonia-synthesis-ready 300 bar in a single compression stroke, massively improving the flow capacity and productivity of a single compressor. Because the combined surface area of the cooling tubes is quite high, owing to their large count and small individual size, the total potential thermal flux is immense. The limiting factor would not be the metal surfaces but the cooling medium, which would have to be pumped at a high enough flow rate to purge the heat of compression from the gas.
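To illustrate why holding the gas near its inlet temperature matters, here is a minimal Python sketch (not from the original text; ideal-gas assumptions, air-like gamma = 1.4, 300 K inlet are illustrative) comparing the ideal isothermal work of a 1-to-300 bar stroke against the reversible adiabatic case:

```python
# Sketch: ideal isothermal vs. reversible adiabatic compression work
# for a single stroke from 1 bar to 300 bar. Illustrative only; a real
# compressor falls somewhere between the two limits.
import math

R = 8.314       # J/(mol*K), universal gas constant
T1 = 300.0      # K, inlet temperature (assumed)
ratio = 300.0   # pressure ratio, 1 bar -> 300 bar
gamma = 1.4     # heat capacity ratio of a diatomic gas (air-like)

# Isothermal work per mole: W = R * T * ln(P2/P1)
w_iso = R * T1 * math.log(ratio)

# Reversible adiabatic work per mole and outlet temperature
w_adi = R * T1 * (gamma / (gamma - 1)) * (ratio ** ((gamma - 1) / gamma) - 1)
T2_adi = T1 * ratio ** ((gamma - 1) / gamma)

print(f"isothermal work: {w_iso / 1000:.1f} kJ/mol")
print(f"adiabatic work:  {w_adi / 1000:.1f} kJ/mol")
print(f"adiabatic outlet temperature: {T2_adi:.0f} K")
```

Under these assumptions the adiabatic stroke demands roughly two and a half times the work and would leave the gas above 1500 K; that difference is the saving the cooling tubes aim to capture.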


Lethal high voltage self-defense gloves


Pochari Technologies has developed a novel lethal self-defense weapon. This is not a “stun gun” or any other ineffectual “non-lethal” electrical self-defense system. Rather, it is a viable alternative to heavy metallic firearms that shoot supersonic lead projectiles to inflict lethal damage on dangerous individuals. This technology uses no kinetic energy to inflict damage; it uses the power of alternating current to instantaneously neutralize an attacker. It is a more elegant, efficient, and simpler way to neutralize dangerous individuals. The core of Pochari Technologies’ AC self-defense suit is the electric vest and electrode-forming gloves.
The weapon forms a mono-piece dielectric vest that is worn underneath normal clothing, the vest is attached to gloves to form an electrode. The basic working principle of the weapon is that the user employs his hands to grab the arms, neck, or legs of the attacker to cause instant paralysis through electrocution while protecting himself from the now-electrically charged victim by wearing dielectric clothing underneath innocuous-looking clothing. This weapon system can be thought of as strategically “feigning vulnerability” as opposed to technologies like open-carrying a conventional firearm, with the intention to lure an attacker as opposed to using strategic deterrence.

Each glove is fitted with small electrically conducting metal strips which carry 120-hertz AC current at a voltage of 400 V; 120 hertz is chosen because it is among the most dangerous frequencies for the human heart. A small lithium-ion battery pack provides enough current through the gloves to cause instant fibrillation, neutralizing an attacker. One glove is positively charged while the other is negatively charged, so current passes from the positive to the negative conductor directly through the major blood vessels around the heart; blood is especially conductive due to the body’s high sodium content. The user can also have a separate electrode attached to his foot: a metallic assembly placed on the tip of the user’s shoe allows him to use one hand to grab the victim’s arm and his foot to complete a circuit that passes through the heart.
Alternating current is so lethal that as little as 100 milliamps over a period of 3 seconds is enough to cause death. At 400 volts, this corresponds to 40 watts; over a period of 3 seconds, only about 33 milliwatt-hours of electrical energy is expended. A typical iPhone has a battery providing 10 watt-hours of energy, enough for roughly 300 such discharges. In comparison to a bullet, the energetic efficiency is far higher.
If the attacker is wearing thick clothing, the voltage can be stepped up to 1,000 volts or more, but a higher voltage poses a greater risk to the user since a stronger dielectric suit is needed. Since the attacker may have thick clothes on, such as a leather jacket, the glove electrodes can be fitted with retractable spikes that puncture thick clothing and reach the skin underneath, allowing the current to pass freely. These very small spikes can be activated using a mechanical spring mechanism with automatic activation by pressure sensors. When the pressure sensors reach their designated threshold of tension, the current is allowed to flow after the spikes have been released. When the pressure is relieved, the sensors respond, the spikes retract, and current flow stops, preventing the user from accidentally electrocuting himself by touching his face. This is all within the realm of 21st-century technology, including sophisticated stress and strain gauges used in testing applications that can detect subtle modulations in pressure. Only around 200 volts are needed to penetrate human skin and drive a lethal current through the heart, so in most instances where the attacker is wearing ordinary cloth clothing or has exposed skin (a T-shirt or shorts), the user will have no difficulty passing a deadly current into the victim. To illustrate the danger of alternating current: if a person touches a standard European 220-volt outlet with both hands, forming a circuit crossing the heart, death is virtually guaranteed unless rapid defibrillation can be provided on the scene. This is because the electricity causes muscle paralysis, making the victim clutch onto the electrical source and providing enough time for lethal fibrillation to occur.
The tactical advantage of this weapon is its inherent stealth: the prospective attacker does not suspect the user of wielding a conventional firearm or knife. This makes the attacker complacent and cocky, compelling him to approach and place himself in a vulnerable position, which then allows the user, provided he is trained and strong enough, to grab at least two body parts, such as an arm and a leg or, ideally, both arms, and pass a current through the attacker’s heart. Even if the attacker has a gun, the user can feign non-resistance and allow the attacker to approach by placing his hands in the air; as the attacker approaches carelessly, the user can then grab two body parts of the unsuspecting attacker to introduce a lethal current, rendering him unable to fire the gun since paralysis is instantaneous. In the event of an unarmed attacker who poses merely a physical threat, such as wrestling or punching the victim, once the user can mount enough resistance to grab onto the attacker’s body parts, ideally torso and arms, the user can introduce current and thwart the attacker. Immediately after current enters the body, it causes paralysis, preventing the attacker from letting go and fighting back; after 1 second at 900 milliamps, the victim will experience heart failure unless a defibrillator is available and applied quickly.
The user of the electrical glove self-defense apparatus is protected from cross-current flowing from the electrocuted victim by his cover-all high-strength dielectric vest, which provides protection against thousands of volts, far more than would be used by this weapon. The user also wears dielectric soles to ensure current cannot flow through his body to the ground.
If the attacker has a firearm pointed at the user, the electrical current will not flow from the firearm to the user unless the dielectric suit is punctured. The dielectric suit can be constructed out of PTFE, polyester, and Nomex, with only 0.15mm needed for a breakdown strength of 7 kV.
This extremely powerful technology gives people living in dangerous parts of the world a means of protecting themselves. Every human being has an inalienable right to defend their life and safety, and if governments cannot guarantee this to their citizens, people must be able to ensure their own safety without resorting to risky firearms that are sometimes banned or difficult to access. You might be asking, “But you’re saying I need to wear some special suit and high-voltage gloves; isn’t that impractical when I can carry a Glock?” Sure, you can say this, but not everyone in the world has access to a Glock, nor is one always the best option for self-defense, for a very simple reason: firearms require preemptive action, surmising a person’s intent before they act, in other words, anticipating the action of an attacker, which makes the legal case for self-defense blurrier. What if the guy is just crazy? Does he really want to attack me? I don’t know if he has a deadly weapon. These are the kinds of questions that make so many police shootings controversial. But what if we had something that was only lethal once an attacker has committed to his actions? This technology is designed so that the attacker first has to confront the victim, for example, try to rob or mug him, and then physically try to overpower him; by allowing himself to enter into a physical altercation, he is proving intent to harm. This makes the legal case for lethal self-defense incontrovertible, as opposed to preemptive fire with a conventional handgun at an approaching potential attacker. Clearly, under an ideal scenario, we should be able to use preemptive self-defense, but governments fear this as it would be destabilizing and “discriminating”.
Aside from civilian self-defense, which was the impetus for its invention, the technology would also have prospective customers among state actors, such as police, SWAT, special forces, or special teams within intelligence agencies tasked with covert operations. Since the weapon generates no noise whatsoever, it can be used where firearms are not suitable, for example, where cross-fire poses a risk to civilians. The weapon also requires no metallic components if the battery is metal-free (oxides only, leaving no magnetic trace), so it can potentially be designed to evade metal detectors, allowing its use in special operations. The electrodes can be constructed out of high-conductivity carbon materials as opposed to conventional metal conductors such as aluminum, steel, or copper. There is a concern the technology would be attractive to bank robbers and other assorted criminals, but aren’t firearms too? Every weapon that has a good side, namely guaranteeing the safety of decent people, can also be used for sinister purposes; Pochari Technologies assumes no responsibility for sinister end-use scenarios. I anticipate that once it is commercialized and all the minor manufacturing details are sorted out, which could take decades, governments would try to ban it like anything they consider threatening to their tyranny. But the nice thing about this technology is that it is extremely easy to obfuscate; there are no tell-tale signs such as gunpowder, lead, brass, or metal firearm parts, which are more easily recognized and traceable.

Hydrostatic light-weight truss technology


A hydrostatic truss uses pneumatic tubes in place of conventional compression-loaded members.
Pochari Technologies is spearheading the pneumatic structure revolution. Although largely unknown to the public, it is possible to design structures that are far less material-intensive by using internal pressure to carry loads. This elegant principle, loading a column in compression while the load is carried by internal pressurization rather than by the walls, is extremely powerful and possesses a myriad of diverse applications. While inflatable structures have an impressive track record of providing extremely lightweight and rapidly deployable structures, they have yet to be used to achieve a high degree of rigidity. What is needed is a paradigm change in inflatable structure technology: moving away from monolithic or monocoque-style canvases that are merely filled with air, forming at best a big balloon, toward a highly rigid member, as rigid as a steel column but a mere fraction of its weight.
Pochari Technologies is improving upon this technology by using pressurized cylinders to achieve the dynamic and structural properties of “pseudo” rigid members, which can be used both for towers (*see page 1 on the drop-down menu) and to build trusses for horizontal load-bearing applications such as bridges or beams.
The impetus behind the design is the need to exploit the highly appealing material properties of many composite fibers (Kevlar/aramid/glass fiber/Vectran) which, despite possessing poor compressive strength, boast immense tensile strength. If these fibers are loaded in hoop stress only, columns can be designed that bear load by transferring force onto the end pistons alone. These rigid columns can then be assembled into a truss-like structure.
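As a rough illustration of the hoop-stress principle, the following Python sketch sizes a thin-walled pressurized tube; every number in it (10 bar, 0.1 m radius, a derated 1 GPa allowable fiber stress) is an assumed example value, not a Pochari design figure:

```python
# Sketch: thin-wall hoop-stress sizing for a pressurized fiber-wound
# column. All input numbers are illustrative assumptions.
import math

P = 1.0e6            # Pa, internal pressure (10 bar, assumed)
r = 0.10             # m, tube radius (assumed)
sigma_allow = 1.0e9  # Pa, allowable fiber hoop stress (aramid, derated)

# Thin-wall hoop stress sigma = P * r / t  ->  required wall thickness
t = P * r / sigma_allow
print(f"required wall thickness: {t * 1000:.2f} mm")

# Axial load carried hydrostatically through the end pistons: the
# column can support an external compressive load of roughly the
# internal pressure times the piston area.
F_axial = P * math.pi * r ** 2
print(f"axial load capacity: {F_axial / 1000:.1f} kN")
```

With these example numbers, a wall only a tenth of a millimeter thick suffices for hoop containment while the pressurized column supports on the order of three tonnes axially, which is the weight saving the concept exploits.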
The weight of this truss technology compared to classic steel or even aluminum is greatly reduced, paving the way for all sorts of novel applications.

Self-erecting craneless hydrostatic levitation tower technology for wind turbines and cell towers






Method to improve the load capacity of high altitude guyed towers using hydrostatic force to eliminate buckling

Christophe Pochari, Pochari Technologies, Bodega Bay, California.

Preface: Energy is by far the most valuable asset after human capital for a nation. The entirety of modern industrial civilization is predicated on the continued flow of high-caloric-value inputs, without which no modern technological society could sustain itself for more than mere months. Unfortunately, the planet contains only small quantities of highly concentrated energy; most of the energy available on earth is in a diffuse form, highly dispersed across its surface as downstream solar energy. The caloric value from the heat of decaying radioisotopes and residual mantle heat is extremely weak at depths accessible to present drilling technology. The highly concentrated energy is principally in the form of gaseous and solid carbon-hydrogen compounds, with oils forming only a small percentage. Virtually all of the calorie-emitting compounds in the crust are composed of carbon and hydrogen; there are no other heat-emitting molecules we have access to for energy. Carbon is found in the upper crust at a concentration of roughly 0.02%, only 2.38 times higher than the concentration of nickel, an expensive metal, and 4.75 times less abundant than manganese, a moderately expensive industrial metal. Carbon is also rarer than strontium and barium, hardly elements abundant enough to burn ad libitum. Worse yet for man’s energy predicament, most of the carbon in the crust is in the form of carbonate rock, limestone, and dolomite, highly oxidized states with no caloric value to speak of. It is estimated that of all the organic carbon on earth, only 0.01% is in the form of hydrocarbons within the sedimentary rocks. While the theoretical quantity of hydrocarbon is massive and represents thousands of years of present consumption, twenty trillion tons as some have estimated, only a tiny fraction is amenable to extraction, rendering this initially impressive number a far more meager one.
Man consumes around 4.5 billion tons of oil annually, 3.8 trillion cubic meters (2.6 billion tons) of methane, and 8.6 billion tons of coal, for a total of nearly 16 billion tons of hydrocarbon annually, or roughly 1,000 years of present consumption if we take the estimate of two times ten to the thirteenth tons of hydrocarbon as a baseline. Most of the crustal carbon is in an oxidized state, bound up with oxygen, offering no energetic value. Only a small fraction of this 0.02% carbon concentration is in the form of energetic, highly reduced molecules, the valuable hydrocarbons that man profusely mines for. Prosini estimates the total reserves of hydrocarbon to be nineteen quadrillion tons, but of course, such estimates are silly and for intellectual curiosity only, since if all the hydrocarbons were combusted, there would be no oxygen left for life on earth and carbon dioxide levels would reach unlivable proportions. If we assume ten percent is practically extractable, there are only around 100 years left. Clearly, mankind must get busy developing depletion-free energy technologies. While the estimated reserves of methane hydrates in the arctic seabed are immense, numbering in the thousands of years, no practical extraction scheme has been proposed. While it might seem at the present moment as if there is ample hydrocarbon available, one must not forget that hydrocarbons are by no means inexpensive anymore, and regardless of one’s personal opinions on climate change, the so-called “energy transition” is not misguided at all. Critics of alternative energy often adduce the cost advantage of methane or coal over photovoltaic or wind generators, but the reality is quite a bit more complex and case-specific.
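The reserve arithmetic above can be reproduced in a few lines of Python; the 2 × 10^13-ton baseline and the ten-percent recoverability factor are the text's own assumptions, not measured figures:

```python
# Reproducing the text's order-of-magnitude hydrocarbon-reserve
# arithmetic. Baseline endowment and recoverability are the text's
# own assumptions.

oil = 4.5e9    # tons per year
gas = 2.6e9    # tons per year (3.8 trillion m3 of methane)
coal = 8.6e9   # tons per year
annual = oil + gas + coal          # ~15.7 billion tons per year

baseline = 2.0e13                  # tons, theoretical hydrocarbon in place
years_theoretical = baseline / annual
years_recoverable = 0.10 * years_theoretical

print(f"annual consumption: {annual / 1e9:.1f} billion tons")
print(f"theoretical endowment: ~{years_theoretical:.0f} years")
print(f"at 10% recoverability: ~{years_recoverable:.0f} years")
```

The division actually yields closer to 1,300 theoretical years and 130 recoverable years, consistent with the round 1,000-year and 100-year figures quoted above.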

If we take the current spot price of methane, which in the U.S. is worth $6.5 per 1,000 cubic feet, the notion that natural gas is a “cheap” source of power is actually incorrect. There are 35.3 cubic feet in a cubic meter, and the density of methane is 0.71 kg/m3, so there are approximately 20 kg in 1,000 cubic feet, yielding a price per kg of $0.32. The caloric value of natural gas is 44.1 megajoules per kg, or 12.25 kWh. The average efficiency of a dual-fuel diesel engine with pilot fuel injection is 40%, and that of a medium-sized gas turbine is 35%. Using the gas turbine, we can produce power for 7.5 cents per kWh, which might seem like a low number to consumers, but is hardly cheap compared to hydropower or nuclear fission. Using coal, the energetic cost-effectiveness is only marginally superior to methane. Newcastle coal futures have historically traded at $150/ton, where historically speaking refers to the past decade. In recent months, coal futures have jumped to over $400/ton due to a convergence of circumstances, principally growing electricity demand. Unfortunately, with coal, highly efficient Brayton and Diesel cycles cannot be exploited, leaving marginally efficient Rankine cycles as the only option. Most steam turbines are under 30% efficient unless they are in the supercritical class, where CAPEX becomes an increasingly dominant factor. Additionally, the cost of the steam turbine and boiler per kW is far higher than that of a gas turbine due to much lower power density and hence greater material intensity. Steam turbines’ high-pressure blades must be constructed from nickel alloys, and while the amount of nickel per kW is insignificant, steam turbines would use more scarce metals per kW than a wind turbine. Coal has an average caloric value of 32 MJ/kg, or 8.89 kWh per kg. At $150/ton, the levelized fuel cost is therefore 5.6 cents per kWh, excluding boiler, turbine, and condenser CAPEX. When these systems are factored in, an additional cent and a half can be added.
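The fuel-cost figures above can be checked with a short Python calculation; the prices, caloric values, and conversion efficiencies are the text's own figures and shift with the market, so treat the outputs as illustrative:

```python
# Sketch of the fuel-cost arithmetic in the paragraph above.
# All prices and efficiencies are the text's figures.

# Natural gas at $6.5 per 1,000 cubic feet
M3_PER_1000CF = 28.32                            # 1,000 cubic feet in m3
rho_ch4 = 0.71                                   # kg/m3 (text's figure)
kg_per_1000cf = M3_PER_1000CF * rho_ch4          # ~20 kg
gas_usd_per_kg = 6.5 / kg_per_1000cf             # ~$0.32/kg
gas_kwh_per_kg = 44.1 / 3.6                      # 44.1 MJ/kg -> ~12.25 kWh/kg
gas_cents = 100 * gas_usd_per_kg / (gas_kwh_per_kg * 0.35)  # 35% gas turbine

# Coal at $150/ton, 32 MJ/kg, ~30% Rankine-cycle efficiency
coal_usd_per_kg = 150 / 1000.0
coal_kwh_per_kg = 32.0 / 3.6
coal_cents = 100 * coal_usd_per_kg / (coal_kwh_per_kg * 0.30)

print(f"gas fuel cost:  {gas_cents:.1f} cents/kWh")
print(f"coal fuel cost: {coal_cents:.1f} cents/kWh")
```

Run as written, this reproduces the roughly 7.5 cents/kWh for gas and 5.6 cents/kWh for coal quoted in the text.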
Steam turbine price estimates can be sourced from online marketplaces like Alibaba, which are usually quite accurate reflections of real-world wholesale market prices without the “fluff factor” of inflated Western retail prices. Dongturbo Electric Ltd retails a 1000 kW condensing steam turbine for $250,000-300,000. These medium-sized condensing turbines consume 15 kg of steam per kWh. The inlet pressure is 2.1 MPa and the inlet temperature is 300 C. The CAPEX of the attendant coal boiler is approximately $50,000. Since we are comparing to a wind turbine, we can exclude the cost of the synchronous generator, since a steam plant would require a synchronous generator as well. The cost per kW for the 1000 kW coal powerplant so far is around $300. A steam turbine can be expected to last over 250,000 hours, but a realistic amortization period is 150,000 hours before a major overhaul is needed. Using this number, a negligible additional 0.2 cents is added per kWh, highlighting the disproportionate fuel share of the LCOE.
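A quick sanity check of the 0.2-cent amortization figure, using the paragraph's own CAPEX and service-life numbers:

```python
# Amortizing the ~$300/kW machinery CAPEX of the 1000 kW coal-plant
# example over a 150,000-hour service life (figures from the text).

capex_usd = 300_000       # turbine + boiler for the 1000 kW example
rated_kw = 1000
service_hours = 150_000   # realistic amortization before major overhaul

lifetime_kwh = rated_kw * service_hours
capex_cents_per_kwh = 100 * capex_usd / lifetime_kwh
print(f"CAPEX share of LCOE: {capex_cents_per_kwh:.1f} cents/kWh")
```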

The conclusion of this brief analysis of state-of-the-art thermal hydrocarbon powerplants suggests a lower limit of 5 to 6 cents per kWh is achievable. Coal is unlikely to fall below $150 per ton in the foreseeable future as electricity demand grows from countries like India, which are under pressure to provide increasing power for urban air conditioning.

In conclusion, until someone successfully overturns the 1st or 2nd law of thermodynamics, man is forced to scavenge tirelessly with his various contraptions to harvest the biosphere for every vestige of calorie-emitting material. As civilization advances and spreads, the demand for caloric value increases strongly, and if this demand cannot be met, a lower standard of living has to be accepted. Daniel Sheehan in San Diego has claimed the ability to generate a temperature difference from thermal equilibrium, but since the paper was written in 2019, there has been no evidence of a third-party prototype conclusively demonstrating useful work from equilibrium. Sheehan’s discovery has the chance, albeit small, to utterly transform civilization as we know it, effectively destroying the entire thermal as well as non-thermal energy sector and ushering in a period of unprecedented geopolitical turmoil as a consequence of lost oil and gas revenue in the Middle East. So even if we conclude the odds of his discovery being a genuine 2nd-law violation are low, it would be remiss to ignore it.

Sheehan’s Maxwellian Zombie, fact or fiction?

Enlightened men believe the last invention man will make is a means of harvesting energy from thermal equilibrium, namely the energy of our surroundings, whether through the supposed zero-point vacuum radiation of the Casimir effect, or through manipulating the flow of heat and entropy in a way that obviates the 2nd law.
In the late 1990s Daniel Sheehan proposed a new kind of Maxwellian demon, though rather than a demon, it would be better called a zombie: not an intelligent creature, but a lumbering brute. Rather than processing inordinate amounts of information on the trajectories of surrounding molecules, it would perform the more banal task of exploiting a niche within a closed system without having to harvest information. By brute force, it would prevent molecules from ever colliding with each other, forcing a disequilibrium to manifest. This is critical, and the only way a temperature differential can be achieved to begin with; without maintaining a sufficient gap between molecules, collisions will occur rapidly, leading to equilibrium. This necessitates that a vacuum be maintained for the system to function. Sheehan’s proposal in hindsight is genius, and perhaps we should feel stupid for not thinking of it. Maybe we can reassure ourselves that it is the fault of the thermodynamic establishment for forbidding us from transgressing the sacred principle that work can never be yielded from a single heat body.
Sheehan’s zombie makes use of a metallic catalyst comprised of refractory metals, metals with extremely high melting points, namely tungsten and rhenium. The apparatus works in a strong vacuum within a closed chamber; hydrogen gas is introduced into the vacuum and heated to 1500 K. It has been known since the early 1900s that hydrogen, a diatomic molecule, will dissociate at elevated temperatures, around 1900 K, and recombine upon a lowering of the temperature or in the presence of a catalyst that induces recombination.
The key for this zombie to work is fine-tuning the dissimilar catalysts to achieve a strong enough differential in the absorption and desorption of atomic and diatomic hydrogen on their surfaces. Rhenium is four times more efficacious at dissociating diatomic hydrogen (which absorbs energy) than tungsten, so it will potentiate that reaction and cool relative to the tungsten, which performs the recombination (which releases energy).
If a heat engine or thermoelectric generator is placed in the reactor, will that drawing off of heat halt the reaction? If heat is drawn from a reactor that is initially heated and then runs in steady-state operation with thermal insulation, there is enough initial energy trapped in the insulated reactor to maintain the activation energy for splitting the diatomic hydrogen. It takes roughly 436 kJ per mole to break the hydrogen-hydrogen bond, so presumably, in a thermally hermetic chamber, this heat can be preserved almost indefinitely. Like a flywheel spun to high speed in a vacuum, it appears to be a perpetual motion machine when in fact it is simply not losing any speed to friction; the initial energetic input will persist until it is drained out to produce work. That would be the main concern with Sheehan’s epicatalytic diode. In the age of power-hungry data centers, it is hard to believe Jeff Bezos would not be knocking loudly on his door, unless of course, Jeff Bezos is not academic enough to spend his time thinking about how to get around the 2nd law. It looks like Kelvin, Clausius, and Planck stand firm, and as long as they do, we are in business, and man will keep contriving ever bigger and more elaborate contraptions to squeeze every molecule of heat and kinetic energy from this earth. If Sheehan is correct, then we are going to retire and surf for the rest of time, since there will be nothing left to do, at least for us in the energy business; energy will be free, and utopia will have arrived. Until then, let us continue discussing wind turbines and the energy predicament.

The 21st century can be characterized as a perfect storm between runaway hydrocarbon demand and the concomitant downstream reverberations of combustion beginning to be widely felt. Not only does their combustion emit nitrogen-oxygen compounds, which react with volatile organic compounds, isoprenes, and terpenes to form ozone (trioxygen). Worse, the carbon dioxide molecule absorbs infrared in the 4.3 and 15-micrometer bands, and may cause an additional radiative forcing, or equivalent insolation, of 1.5 watts per square meter. This increase in equivalent insolation has been highly controversial, but it is reasonable to expect it to increase evaporation intensity, which may disrupt macroclimatic stability. Sea level is a potential concern; while there is no grave reason to fear sea level rise at the present moment, there is a more than acceptable chance it may accelerate in the near future due to so-called “threshold effects”. Sea level rise has remained remarkably steady during the post-industrial-revolution era, but this is not what should be predicted if the greenhouse gas theory is correct, since GHG emissions have increased rapidly in the past half-century, and even more rapidly with the recent industrialization of China. Regardless of whether GHG emissions are as catastrophic as predicted by certain climate models, it is worth hedging the future viability of civilized life through the use of hydrocarbon-free power.

In light of these conditions, modern civilization demands a host of new options as substitutes for the energy of yore.

In order to provide this depletion and externality-free energy source, we have identified a method to make available one of the most abundant natural sources of energy, dilute solar energy in the form of pressure gradients: wind.

Unfortunately, wind velocities rapidly decay as the wind comes into contact with the earth’s surface, leaving only relatively slow wind speeds at the heights commonly associated with extant wind turbines. Another rather sobering factor has been the decline in wind velocity due to global greening (a corollary of rising CO2 concentrations): the increased surface roughness caused by a growth in shrubs and trees has caused wind speeds to decay closer to the surface, but not higher in the atmosphere.

A few points we believe are important to highlight: 

#1: The impetus for developing hydrocarbon substitutes should be motivated by a combination of resource and atmospheric/climatic concerns, but it should remain balanced and level-headed. The fact is that hydrocarbons are naturally becoming scarcer and costlier, and with enough time, will be depleted. But it would also be remiss to ignore the negative externalities imposed by their combustion; through a combination of weak greenhouse gases, such as carbon dioxide, and stronger greenhouse gases, such as nitrous oxide and methane, the continued combustion of hydrocarbons may prove regrettable and compromise the stability of the macro-climate. A doubling of carbon dioxide will in theory raise global temperature by around one and a half degrees, but there is still debate as to the exact sensitivity to the CO2 molecule since, unlike methane, which has a wide absorption band, CO2 only absorbs infrared in the 4.3 and 15-micrometer bands and is transparent to the rest of the spectrum.

Energy development should not necessarily be driven entirely by policy, which may not perform the necessary selection for performance and financial viability; development should rather be based on a combination of market forces and societal concerns that compel the adoption of more competitive technologies. Wind energy, or any alternative energy technology, must succeed on its own, without subsidies, promotion, or favorable treatment. The technology should succeed and proliferate based on its intrinsic attributes, and these attributes should in part suggest an innate superiority, whether in cost-effectiveness, longevity, or environmental cleanliness, over contemporary hydrocarbon technologies. If these attributes are not met, there is no rationale for deployment, regardless of a technology’s social attractiveness on non-“hard” metrics. In other words, we should not develop technologies that are less effective than hydrocarbons unless they possess other attributes that compensate for their relative deficit. We are making the argument that high-altitude terrestrial wind is a superior form of primary energy generation even compared to thermal technologies. The arrow of technology has pointed in a single direction in the history of civilized man, toward ever more exalted, more potent, and more intensive and expansive forms. Technology has rarely regressed, and such a condition would be greatly lamentable.

#2: The term “renewable energy” should be dispensed with and replaced by the term “natural energy harvesting”, since no technology is renewable according to the strict definition of the word. While it is true that the steel and copper used in a wind generator can in theory be recycled indefinitely, there are nevertheless geo-metallurgical and processing limitations that cap the scalability of all human technologies and place an upper limit on recyclability.

#3: We must be willing to diverge from the design dogma in the present industry, such as the emphasis on the use of glass fiber over metallic blades, or the use of multi-megawatt scale as opposed to high densities of sub-megawatt turbines. 

#4: We must stop futilely trying to force wind energy, or any spasmodic source, into the power grid. Present-day electrical grids are designed for a variable but predetermined, controllable flow of current. Current is modulated only above the so-called "base load" using variable-output thermal engines throttled according to temporal demand conditions. AC grids require a constant frequency of 50 or 60 cycles per second depending on the country, while a wind turbine, without a rectifier or doubly fed induction generator, produces a variable-frequency alternating waveform. When the turbine yields more current than the grid can consume, energy is shunted and lost forever. Since it is unlikely we can meet the stringent exigencies imposed by the AC grid, we should look for other options. Rather than force grid integration, these spasmodic wind sources should be deployed where a certain degree of variability can be more easily tolerated. For example, spasmodics can be used to cut present hydrocarbon consumption by producing hydrocarbon-intensive chemicals: hydrogen for ammonia, methanol production, hydrocracking, caustic soda, aluminum, electroplating, titanium production, steel recycling using electric arcs, and many other electricity-intensive or hydrogen-intensive processes. What distinguishes these processes is their ability to absorb variable power, while grids struggle to absorb non-isochronous current without massive storage banks and/or stability issues. Every joule of energy saved by avoiding hydrocarbon consumption in these sectors is more energy available elsewhere, or a reduction in emissions. A key competitive advantage afforded by any spasmodic power source is its ability to produce storable, energetic compounds that are conducive to transportation and storage and to on-demand reconversion into calorific value.

Introduction and motivation:

Wind harvesting technology has remained almost entirely unchanged since the days of the German Growian, the American MOD series, the Danish Nibe A, and the Italian Gamma 60, among others. Harvesting power from the wind is not a "boondoggle" by any means if it is engineered properly, if geography is appreciated, if grid connection and frequency modulation are circumvented, and if structural efficiency is optimized. At altitudes where wind speeds reach 10+ meters per second, there is enough acreage available onshore alone to exceed the world's total energy consumption many-fold.
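The value of those 10+ m/s sites follows directly from the textbook wind power density formula, which scales with the cube of wind speed. The sketch below uses a standard sea-level air density; the comparison speeds are illustrative, not figures from the text.

```python
# Wind power density: P/A = 0.5 * rho * v^3 (textbook formula).
# rho and the comparison speeds are illustrative assumptions.
rho = 1.225  # air density at sea level, kg/m^3

def power_density(v):
    """Kinetic power flux through a unit rotor area, in W/m^2."""
    return 0.5 * rho * v**3

# Cubing the wind speed is what makes the 10+ m/s regime so valuable:
for v in (5.0, 7.0, 10.0):
    print(f"{v} m/s -> {power_density(v):.0f} W/m^2")
# doubling the speed from 5 to 10 m/s yields 8x the power flux (2^3 = 8)
```

This cubic relationship is why reaching the faster wind regime a few dozen extra meters up pays off so disproportionately.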


Map from the excellent "Global Wind Atlas" by DTU. Judging from this map, it is clear there is no need to bother building turbines in the corrosive ocean, with all the attendant foundation, electrical cabling, and installation challenges. It is our opinion that offshore wind is rendered pointless by high-altitude terrestrial technology. After all, the whole idea of offshore wind is to tap into a high-velocity regime, but since those same speeds can be reached on land by simply going up a few extra meters, one has to seriously wonder why anyone would subject themselves to the ferocity of the ocean when they can take to the safety of land. In fact, there is more land available for turbine installation than there is suitable shallow water: the total area where depth is less than 100 meters, the practical limit for foundation installation, is relatively small. Additionally, fishing vessels risk colliding with the turbines at night and during storms. A guyed wind turbine takes up virtually no precious farmland, unlike conventional wind turbines, since the tower base is much narrower and the silo is placed entirely underground. The guy cables, which exit the earth at a 55-degree angle, do not impede farm operation since they are spaced far apart, allowing harvesting vehicles to pass freely between them. High-altitude terrestrial self-erecting turbine technology makes expensive offshore installation redundant and obsolete.

The oil crisis of the 1970s prompted the advanced nations of the world to embark on a path of alternative energy development, probing the technical feasibility of large-scale wind installations, concentrated solar, and photovoltaics. This occurred amidst growing disillusionment with nuclear fission, as a combination of environmental fears and cost overruns killed most of the Panglossian predictions made during the 1950s. That concerted effort to identify a viable alternative to hydrocarbons has only recently been surpassed, now driven by fears of greenhouse gases rather than resource depletion.

In 1974, the DOE commissioned the "Project Independence" report, which studied a multitude of wind turbine configurations, including a two-bladed turbine with a mast as high as a thousand feet. In 1975, NASA contracted out blade manufacturing to Lockheed and installed the Mod-0 in Sandusky, Ohio. In 1978, Boeing was contracted to scale up the Mod-0 with the Mod-1 and Mod-2. In 1976, Germany, under the Federal Ministry of Education and Research, tasked MAN SE with building a megawatt-scale wind generator called the Growian 1 and 2. In Denmark, extensive research and development took place with the ELSAM Nibe series of turbines, and in Italy, work proceeded on the Gamma 60.


U.S. ammonia prices stabilize near $1,500 per ton

Natural gas prices have stabilized at just over $4/MMBtu since August 2021, sending anhydrous ammonia prices to over $1,400. As of January 13, anhydrous prices in the U.S. have reached $1,486/ton, up from $1,434 in December. At this price, autogenous ammonia production using advanced high-temperature microporous insulated reactors fed from photovoltaic facilities could yield payback times of less than two years. The innovative capacity of human civilization will be tested in this new environment of artificially elevated nitrogen fertilizer prices.
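A rough sketch of why gas-price spikes pass through to ammonia so strongly: conventional plants consume a large amount of natural gas per ton of product. The gas intensity figure below (33 MMBtu per ton) is an assumed industry-typical value for steam-methane-reforming plants, not a figure from this post; the prices are from the post.

```python
# Rough margin sketch for conventional gas-fed ammonia production.
# gas_intensity is an assumed industry-typical value, not from the post.
gas_price = 4.0          # $/MMBtu (from the post)
gas_intensity = 33.0     # MMBtu per ton NH3, assumed typical for SMR plants
ammonia_price = 1486.0   # $/ton, U.S. anhydrous, January 13 (from the post)

feedstock_cost = gas_price * gas_intensity
print(f"Gas feedstock cost: ${feedstock_cost:.0f}/ton")
print(f"Gross spread:       ${ammonia_price - feedstock_cost:.0f}/ton")
```

The wide spread between feedstock cost and selling price is what makes short payback periods for alternative production routes plausible in this price environment.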



Powerplant: Horizon PEMFC, 50% efficiency, 5 kW/kg (3 hp/lb)

Gross weight: 10,500 lbs

Empty weight @ 55%: 5,775 lbs

Drag: 800 lbs

Power: 520 hp

Hydrogen consumption: 23.3 kg/hr

Propeller diameter: 8.75 ft

Number of propellers: 2

Power loading: 5.5 hp/ft2

Power-thrust ratio: 5.15 @ sea level, 1.54 @ FL280

Cruise speed: 320 MPH

Range: 4,700 miles

Endurance: 14.68 hours

LH2 tankage drag thrust penalty: 20 kg LH2

LH2 fuel weight: 342 kg

Tankage @ 35%: 119 kg

Fuel weight at 2,000 miles: 154 kg

Volume: 170 cubic feet

Block fuel weight: 1,017 lbs

Payload: 4,725 lbs

Net payload, 4,700 mi: 3,700 lbs

Net payload, 2,000 mi: 4,285 lbs

Equivalent jet fuel weight with PT6A-67B: 5,390 lbs

Cargo cost for 7,000 miles: $0.85/kg @ $2/kg LH2

Conventional air freight, pre-Covid average, Hong Kong to North America: $3.5/kg

Cost advantage: 4x

UAV manufacturing cost: $1,500,000

Airframe life based on 757: 100,000 hours

Hourly airframe cost: $15
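The headline figures in the spec list above are internally consistent, which can be checked with standard conversions. The sketch below uses the standard lower heating value of hydrogen (33.3 kWh/kg), which is not stated in the list itself.

```python
# Cross-check of two figures from the spec list above.
# 33.3 kWh/kg is the standard lower heating value of hydrogen (assumed).
shaft_power_hp = 520
fc_efficiency = 0.50          # PEMFC efficiency from the spec
h2_lhv = 33.3                 # kWh per kg H2, lower heating value

shaft_kw = shaft_power_hp * 0.7457
h2_per_hr = shaft_kw / fc_efficiency / h2_lhv
print(f"H2 consumption: {h2_per_hr:.1f} kg/hr")   # matches the 23.3 kg/hr listed

cruise_mph, endurance_hr = 320, 14.68
print(f"Range: {cruise_mph * endurance_hr:.0f} miles")  # ~4,700 miles, as listed
```

The hydrogen consumption entry is therefore exactly what the listed power and fuel-cell efficiency imply, and the range entry is simply cruise speed times endurance.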

Galton reaction time paradox solved.

Christophe Pochari, Pochari Technologies, Bodega Bay, CA

Abstract: The paradox of slowing reaction time has not been fully resolved. Since Galton collected 17,000 samples of simple auditory and visual reaction time from 1887 to 1893, achieving an average of 185 milliseconds, modern researchers have been unable to reproduce such fast results, leading some intelligence researchers to erroneously argue that the slowing has been mediated by selective mechanisms favoring lower g in modern populations.

Introduction: In this study, we developed a high-fidelity measurement system for ascertaining human reaction time, with the principal aim of eliminating the preponderance of measurement latency. To accomplish this, we designed a high-speed photographic apparatus in which a camera records the stimulus along with the participant's finger movement. The camera is an industrial machine vision unit built to stringent commercial standards (a Contrastec Mars 640-815UM, $310), feeding through a USB 3.0 connection into a Windows 10 PC running Halcon machine vision software; it records at a high frame rate of 815 frames per second, or 1.2 milliseconds per frame, using a commercial-grade Python 300 sensor. The camera begins recording, the stimulus source is then activated, and the camera continues filming until after the participant has depressed a mechanical lever. The footage is then analyzed with a frame-rate analysis tool such as VirtualDub 1.10. By carefully examining each frame, the point of stimulus appearance is set as point zero, where the elapsed reaction time commences. As the LED monitor refreshes to display the stimulus color, green in this case, the frame analyzer is used to identify the point where the screen refresh is approximately 50 to 70% complete; this point is set as the beginning of the measurement, as we estimate the human eye can detect the presence of the green stimulus before it is fully displayed. Once the point of stimulus arrival is ascertained, the next step is identifying the point where finger displacement is conspicuously discernible, that is, when the lever begins to show evidence of motion from its stasis prior to displacement.
Using this innovative technique, we achieved a true reaction time to visual stimuli of 152 milliseconds, 33 milliseconds faster than Francis Galton's pendulum chronograph. We collected a total of 300 samples to arrive at a long-term average. Using the same test participant, we made a comparison against a standard PC measurement system running Inquisit 6, achieving results of 240 and 230 milliseconds depending on whether a desktop or laptop keyboard was used. This difference of 10 ms is likely due to the longer keystroke distance on the desktop keyboard. We also used the famous online test and achieved an average of 235 ms. Across the two tests, an internet version and a local software version, the total latency appears to be up to 83 ms, roughly 35% of the gross figure. These findings strongly suggest that modern methods of testing human reaction time impose a large latency penalty that skews results upward, which is why reaction times appear to be slowing. We conclude that rather than physiological changes, slowing simple RT is imputable to the poor measurement fidelity intrinsic to computer/digital measurement techniques.
In summary, it cannot be stated with any degree of confidence that modern Western populations have experienced slowing reaction time since Galton's original experiments. This means attempts to extrapolate losses in general cognitive ability from putatively slowing reaction times are seriously flawed and based on confounding variables. The reaction time paradox is not a paradox but rather a conflation of latency with slowing, a rather elementary problem that continues to perplex experts in the field of mental chronometry. We urge mental chronometry researchers to abandon measurement procedures fraught with latency, such as PC-based systems, and use high-speed machine vision cameras as a superior substitute.
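The latency decomposition described above is simple arithmetic on the reported figures, and the camera's frame rate bounds the timing resolution of the method:

```python
# Decomposing the measured PC reaction time into true RT plus apparatus
# latency, using the figures reported above.
true_rt = 152          # ms, high-speed camera method
online_rt = 235        # ms, online test average
frame_ms = 1000 / 815  # camera frame duration, the method's timing resolution

latency = online_rt - true_rt
print(f"Implied apparatus latency: {latency} ms "
      f"({latency / online_rt:.0%} of the measured figure)")
print(f"Camera timing resolution: +/-{frame_ms:.2f} ms")
```

At 815 frames per second, the quantization error of the camera method is on the order of a millisecond, versus the tens of milliseconds of keyboard and display latency in PC-based testing.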



Anhydrous ammonia reaches nearly $900/ton in October


Record natural gas prices have sent ammonia skyrocketing to nearly $900 per ton in the North American market. Natural gas has reached $5.6/1000 cf, driving ammonia back to 2014 prices. Pochari Technologies' distributed photovoltaic production technology will now become even more competitive, featuring even shorter payback periods.

A battery electric cargo submarine: a techno-economic assessment


*Christophe Pochari, Bodega Bay, CA

*Pochari Technologies

Abstract: An electric submarine powered by lithium-ion batteries, with a cruising speed of under 8 knots, is proposed for ultra-low-cost, emission-free shipping. This concept, if developed according to these design criteria, would be able to lower the cost of shipping below that of conventional container ships. We believe this concept could revolutionize shipping. The relatively small size of the submarine allows it to avoid large ports, reducing congestion and turnaround time, and allows the shipper to deliver directly to virtually any calm shoreline.

Due to the absence of resistance generated by the stern and bow waves, a submarine can sometimes be a more efficient marine vehicle than a conventional vessel, especially in rough oceans, where waves crashing in the incoming direction impose a substantial parasitic load. Additionally, submarines are safer due to their immunity from storms, swells, and rogue waves. Furthermore, current velocity diminishes rapidly with depth, so when currents run in the incoming direction, the parasitic load on the vessel is further reduced. The risk of cargo loss is also minimized, as container ships routinely lose valuable cargo at sea.


The resistance at 6.2 knots is estimated to be 0.3 lbf/ft2 of wetted area; at 8.4 knots, 0.5 lbf/ft2. This means a submarine capable of carrying 950 cbm of cargo needs only a paltry 90 hp at 6.2 knots. The propulsive efficiency of large, slow-vessel ship propellers is on the order of 25-30 lbf/shp, with 28 lbf/hp being a realistic estimate for a submarine of this size.

Henrik Carlberg at NTNU (Norway) estimated that a commercial submarine designed for oil and gas applications would use 165 kW at 6.2 knots with a wetted area of 1920 m2.
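Carlberg's 165 kW figure can be reconciled with the skin-friction and propulsive-efficiency estimates given above; the sketch below takes both values from the text and uses only unit conversions:

```python
# Reconciling the skin-friction estimate with Carlberg's 165 kW figure.
# Drag per unit wetted area and lbf/hp are taken from the text above.
wetted_m2 = 1920             # m^2, Carlberg's wetted area
drag_psf = 0.3               # lbf per ft^2 of wetted area at 6.2 knots
lbf_per_hp = 28              # thrust per shaft horsepower at this size

wetted_ft2 = wetted_m2 * 10.7639     # m^2 -> ft^2
drag_lbf = wetted_ft2 * drag_psf
shaft_hp = drag_lbf / lbf_per_hp
print(f"Drag: {drag_lbf:.0f} lbf, shaft power: {shaft_hp:.0f} hp "
      f"({shaft_hp * 0.7457:.0f} kW)")   # ~165 kW, matching Carlberg
```

The two independent estimates land on essentially the same power requirement, which lends some confidence to the resistance figures used for the design.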

The submarine design featured here has the same wetted area, with a net hull volume, minus ballast tanks and battery volume, of 2300 cbm.

This 2300 cbm cargo submarine would take 1000 hours to traverse 7000 miles and consume 215,000 kWh along the way, costing $6,450 at $0.03/kWh. With a 3,000-cycle Li-ion life at $110/kWh battery prices (Panasonic 2170 cells), battery depreciation adds roughly $7,800 per trip, or $3.4/cbm. The total cost would be $12.4/cbm ($930 per 40 ft container equivalent) including an empty trip back. Excluding the return trip, assuming goods are transported in the other direction as well, the cost per 40 ft container equivalent would be $465, far below the pre-Covid price of $1,500 (Freightos Baltic Index) for existing bunker-fuel-powered mega-container ships. For such a small vessel using non-hydrocarbon fuel, it is remarkable that the cost is close to that of massive, highly optimized container ships.
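The per-trip economics above can be reproduced from the stated inputs. One assumption is introduced below: a 40 ft container equivalent is taken as roughly 75 cbm, which is not stated in the text but is consistent with the $930/FEU figure.

```python
# Per-trip economics from the figures above.
# cbm_per_feu is an assumed conversion, not stated in the text.
energy_kwh = 215_000
power_price = 0.03        # $/kWh
battery_opex = 7_800      # $ per trip, battery cycle depreciation
cargo_cbm = 2_300
cbm_per_feu = 75          # assumed cbm per 40 ft container equivalent

energy_cost = energy_kwh * power_price            # electricity per leg
round_trip = 2 * (energy_cost + battery_opex)     # includes the empty return
print(f"Round trip: ${round_trip / cargo_cbm:.1f}/cbm, "
      f"${round_trip / cargo_cbm * cbm_per_feu:.0f}/FEU")
print(f"One-way:    ${round_trip / 2 / cargo_cbm * cbm_per_feu:.0f}/FEU")
```

Both the $12.4/cbm round-trip figure and the $465 one-way container figure fall out of the same two cost lines, electricity and battery depreciation, underscoring how simple the operating cost structure of an unmanned battery vessel would be.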

Construction costs for the submarine have been estimated at $5,000,000, with the steel structural materials costing $1,100,000 at a price of $800/ton. With a lifetime of 40 years, hourly CAPEX and depreciation are minimal.

The submarine would be unmanned, saving on crew costs, and would require no air-independent propulsion (AIP) system, as both the manned crew and air-breathing combustion propulsion are eliminated.


Maximum displacement: 3,890,000 kg

Structure weight: 700,000 kg

Cargo volume: 2300 cbm (186 kg/m3 cargo density avg)

Wetted area: 1990 m2

Ballast volume: 530 m3: 543,000 kg

Battery at 200 wh/kg and 500 wh/liter (rectangular): 268,000 kWh (80% depletion): 1,340,000 kg: 536 m3

Total loaded weight: 2,850,000 kg

Front and rear weights: 700,000 kg steel plates

Motor power: 270 hp

Length: 72.9 m

Diameter: 9.2 m

The images below are sourced from Henrik Carlberg's thesis on a commercial submarine for oil and gas. Skin-friction estimates were corroborated with Martin Renilson's estimate of 59,000 newtons for a wetted area of 1400 m2 at a speed of 9.7 knots. To adjust for our lower speed, a drag ratio of approximately 2.2 between 9 and 6 knots is taken from the CFD analyses of Moonesun et al. and Putra et al. Propulsive efficiency is calculated using the formula below.
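The corroboration described above amounts to normalizing Renilson's resistance figure by wetted area and scaling it down by the ~2.2 speed ratio; the sketch below does exactly that, with only unit conversions added:

```python
# Scaling Renilson's drag figure from 9.7 knots down to ~6 knots using
# the ~2.2 ratio taken from the cited CFD studies (Moonesun, Putra).
renilson_n = 59_000       # N, total resistance
renilson_m2 = 1_400       # m^2, wetted area
speed_ratio = 2.2         # drag ratio between ~9 and ~6 knots

n_per_m2 = renilson_n / renilson_m2
psf_97kn = n_per_m2 / 47.88            # N/m^2 -> lbf/ft^2
psf_6kn = psf_97kn / speed_ratio
print(f"{psf_97kn:.2f} lbf/ft^2 at 9.7 kn -> {psf_6kn:.2f} lbf/ft^2 at ~6 kn")
# ~0.4 lbf/ft^2, the same order as the 0.3 lbf/ft^2 figure used above
```

The scaled value lands slightly above the 0.3 lbf/ft2 estimate used in the design, so the power and energy figures above should be read as optimistic but within the range the literature supports.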



Reference: IJMS 42(8), 1049-1056 (Moonesun et al.).