This article presents an in-depth analysis of what I refer to as the "nose-out" phenomenon. For the unfamiliar, there was one "live" video feed broadcast by FOX on 9/11/01 that shows (if you believe the "official" story we've been told) the fully-intact nosecone of flight UA175 exiting the northeast face of WTC2 after impact.
Although this phenomenon is hardly "new" to most 9/11 researchers, I have yet to see anything written about it that goes much further than to simply state that it is "impossible." While I wholeheartedly agree with that assessment, stopping there opens up the logical question of what it was, if it was not the nose of a plane.
I have read of only three alternative answers to this question: a missile, a hologram, and a computer generated image (CGI). Although these alternatives apparently seem too far-fetched for the majority of 9/11 researchers to accept, I can assure you that all three are more viable answers than the "official" belief (although not equally more viable).
It should be quite obvious to any individual that what "exits" the northeast face of WTC2 cannot be the nosecone of any plane. Ignoring psychological aspects, the only logical reason I can come up with as to why any person would honestly believe that aluminum can pass through steel by osmosis (did a fully-intact nosecone somehow fit through a window?) is the lack of any comprehensible alternative explanation.
The main purpose of this article is to provide that explanation. For the record, you will not find the words "missile" or "hologram" anywhere in this article following the period at the end of this sentence (a missile, or any solid on this planet for that matter, is ruled out by the fact that there was no "exit hole").
Due to the vast amount of material that the "nose-out" phenomenon ultimately provides upon close scrutiny, I have chosen to break the scope of this article into two parts. Here in Part I, I will strictly be dealing with both how and why the “nose-out” phenomenon occurred.
Part II will focus on the subsequent attempt to cover-up this blunder of epic proportions, and the many errors that were made in that hasty process. My ultimate goal is to bolster the logical explanation I am providing here in Part I with additional visual and physical evidence I will follow up with in Part II.
When all is said and done, it should be perfectly clear that the “nose-out” phenomenon was nothing but another FOX-aired TV-Fakery blooper.
Although this article does not contain very complicated math, it does involve some complicated visuals. I will be examining the effect of “Chopper Drift” as it pertains to the “nose-out” phenomenon observed in three different sources of the same “live” camera footage. These three sources are known as Saltergate, Loose Change Saltergate, and the same video upon which I have based my last two articles - what I will henceforth refer to as Friedlgate.
Rather than having you watch the entire Friedlgate video yet again, I have created an abbreviated and “enhanced” version for the purposes of this article. In this video, I have excerpted frames 13750 through 14305 from the Friedlgate video. This captures the footage from the first clear frame from the “Chopper 5” video feed until the blackout frames after the “impact” and subsequent “nose-out” phenomenon. For further clarity, I have centered the frame on the “nose-out” location, and cropped the new video accordingly, being careful to maintain the same aspect. Furthermore, I have zoomed in an additional 7X between “impact” and “nose-out.”
This video has only been created to simplify my introduction. All frames referenced and/or presented in this article come directly from the original Friedlgate source. Just in case you still manage to miss the “nose-out” at the end of the video, I have circled it below the video in a screenshot of the frame after which this article has been named: Pinocchio.
The first frame containing any part of the CGI is frame 14264, which I have named CGI Cue.
The last frame containing any part of the CGI is frame 14305, the last frame before the feed is momentarily cut off. Due to frame distortion caused by a noise bar in frames 14304 & 14305, I will refer to frame 14303 as CGI Cut for the purposes of this analysis.
Although I have carefully examined every frame between 14264 and 14305, CGI Cue, Pinocchio, “Exit” Fireball, and CGI Cut were the only ones I deemed critical enough to bother naming, for reasons which will become evident in the Analysis section much later in this article.
I will quantify “Chopper Drift” based on a frame-by-frame analysis, using a benchmark frame location as a reference point relative to objects moving within the frames. Due to the precise requirements of this analysis, I will toss aside my Vernier calipers and resort to pixel counting. What I refer to as “Chopper Drift” could in actuality be comprised of many factors in combination with actual linear chopper drift, such as chopper rotation, camera movement, and camera stabilization software adjustments.
For this reason, I have capitalized and placed the term “Chopper Drift” in quotes to represent the following definition:
“Chopper Drift”: The cumulative effect of all factors which caused the frame boundaries within the Friedlgate source footage to shift relative to the fixed objects that were being filmed.
There is some prerequisite methodology and research that I need to summarize before I present my analysis.
Prerequisites – Frame Alignment Methodology
I will start by presenting the frame alignment technique used, by including one example graphic to explain the method I will be using later on in this article to quantify “Chopper Drift.”
If you’ve read my previous article, you should already be familiar with these two frames. In the graphic above, I have aligned Eclipse with Zoom3 both vertically (using horizontal black lines) and horizontally (using vertical red lines). I used two reference lines for each axis to ensure there were no changes in either zoom factor or aspect. Had there been any change in zoom factor or aspect, I would not have been able to get all four lines to line up.
We can determine the effect of “Chopper Drift” relative to these two frames by counting the number of pixels by which the frame has shifted relative to the fixed tower positions. Since all logos are fixed relative to the frame boundaries, I can use any point on one of these logos as my benchmark pixel.
I chose to use one pixel to the right of the endpoint of the “LIVE” caption underline as my benchmark pixel to determine offset. I could have used either the “HIGH 5” or “Good Day” fixed logos, or even the black frame borders to come up with the same result. I chose the “LIVE” caption underline because it is a highly contrasted, one-pixel-high straight line which is “out of the way” of what I am trying to draw attention to in the frames.
Using this method, I was able to determine that relative to Zoom3, the frame contents of Eclipse have shifted right by four pixels and down by one pixel. The more correct way of saying this is that the frame boundaries of Eclipse have shifted left and up relative to Zoom3.
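To make the pixel-counting concrete, here is a minimal Python sketch of the offset calculation. The coordinates used are hypothetical, not measured from the actual frames; only the resulting (+4, +1) shift matches the numbers above.

```python
# Sketch of the benchmark-pixel offset calculation. The benchmark (e.g. the
# endpoint of the "LIVE" caption underline) is fixed to the frame boundaries,
# so measuring a fixed object (such as a tower corner) against it reveals how
# far the frame contents have shifted between two frames.
# NOTE: the coordinates below are hypothetical, for illustration only.

def content_shift(object_xy_frame_a, object_xy_frame_b):
    """(dx, dy) of the frame contents in frame B relative to frame A.
    Image coordinates: positive dx = contents shifted right,
    positive dy = contents shifted down."""
    dx = object_xy_frame_b[0] - object_xy_frame_a[0]
    dy = object_xy_frame_b[1] - object_xy_frame_a[1]
    return dx, dy

# Hypothetical tower-corner positions, measured relative to the benchmark
# pixel, in Zoom3 and in Eclipse:
dx, dy = content_shift((250, 120), (254, 121))
print(dx, dy)  # 4 1 -> contents shifted right 4 px and down 1 px,
               # i.e. the frame boundaries shifted left and up.
```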
As an added bonus, we can also note that this CGI descended approximately one fuselage diameter (about 16 ft) in 0.4 seconds (12 frames). Even more amazing is that once the “tail” emerges from behind the Good Day logo, it remains completely level until it disappears “into” WTC2. Of course, it is impossible for ANY real plane (including a fighter jet) to instantaneously “level itself” from a descent rate of 40 ft/s. However, since the scope of this article is limited to the “nose-out” phenomenon, I am only including this physical impossibility as a “Bonus” note.
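The bonus-note numbers check out with simple arithmetic, assuming the standard 30 frames/second broadcast rate:

```python
# Descent-rate arithmetic for the "Bonus" note: one fuselage diameter
# (~16 ft) in 12 frames of 30 fps video.
FPS = 30
frames = 12
drop_ft = 16.0

elapsed_s = frames / FPS            # 12 / 30 = 0.4 s
descent_rate = drop_ft / elapsed_s  # 16 / 0.4 = 40 ft/s
print(elapsed_s, descent_rate)      # 0.4 40.0
```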
Prerequisites - Research
Here is where things start to get a little more technical. I am not a video expert by any means, so I had to do a little extra research in order to understand the basics of live CGI insertion technology. Specifically, I needed to understand the parameters by which a CGI would either be seen or obscured.
The easiest example of this technology to research is Sportvision’s "1st & Ten"™ graphics system. The next section is a summary of what I learned in about ten minutes as a result of my research. If you find my summary to be insufficient, I have included links to the sites I visited in the Reference section under “SporTVision Research Links.”
Prerequisites - Live CGI Insertion Technology Research Summary
Basically, multiple cameras in multiple locations constantly (every 1/30s) feed camera data (such as position, aspect, and zoom) to computers that compare their input to a known model of the image they are filming. In the case of the virtual yellow line which represents the first down line in football, the model is the football field. This is (relatively) easy to do on an empty field.
The difficulty arises when there are people and objects on top of the field – such as players, referees, footballs, etc. In order to prevent the yellow line from appearing on these people/objects, they use colors to distinguish between the players/objects and the field.
In order for this technology to work properly, the color of the playing field needs to be “unique.” Problems arise when uniforms are too close to the color of the field. In cases such as these, the virtual line will become visibly superimposed on a player’s body or uniform, rather than that player obstructing the line from view.
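The keying rule described above can be sketched as a simple per-pixel color-distance test. This is only a toy illustration of the principle, not Sportvision's actual algorithm; the colors, tolerance value, and function name are all made up for the example.

```python
# Toy sketch of the keying rule: the virtual graphic is drawn only over pixels
# whose color is close enough to the known background color (the field, or in
# the 9/11 case the sky). Anything else -- a player, a uniform, a tower --
# occludes the graphic. NOT Sportvision's real algorithm; illustration only.

def draw_graphic_pixel(frame_rgb, key_rgb, graphic_rgb, tolerance=40):
    """Return the output pixel: the graphic if the frame pixel is close enough
    to the key color, otherwise the original frame pixel (occlusion)."""
    distance = sum(abs(f - k) for f, k in zip(frame_rgb, key_rgb))
    return graphic_rgb if distance <= tolerance else frame_rgb

sky = (110, 170, 230)  # hypothetical key color
print(draw_graphic_pixel((112, 168, 228), sky, (255, 255, 0)))  # sky-like pixel: graphic shows
print(draw_graphic_pixel((40, 90, 30), sky, (255, 255, 0)))     # dark green pixel: graphic occluded
```

Note how a dull-green uniform pixel that drifts within `tolerance` of the field color would wrongly receive the graphic, which is exactly the failure mode described next.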
After searching the internet for several hours over the past couple of days looking for an example of this case, I came up empty. Luckily, it was a rainy day in Foxboro yesterday as the Patriots lost to the Jets. If I had actually recorded the game instead of just the highlights, I’m sure I could have provided an example with a yellow first down line, instead of the blue line of scrimmage. Rest assured, it’s the same technology, just a different colored line.
The pants and sleeves of the Jets’ uniforms are already somewhat of a dull green. Combine this with a little mud and a little haze and rain, and this is what you get:
I’ve also included a short video clip of the ESPN footage from SportsCenter.
Prerequisites - Live CGI Insertion Applied to TV-Fakery
Early in the morning on September 11, 2001, there wasn’t a cloud in the sky anywhere near Manhattan. Not one “live” shot or replay on that day showed the CGI cross in front of any smoke, either. Because of this graphics system’s requirement of a constant background color, this was an essential aspect of footage shown from any angle on that day.
Of course, as I will get to in Part II of this article, all the later videos could be altered to any editor’s heart’s content, since they weren’t subject to this necessary parameter.
Since we know that the CGI would only appear when applied to a sky-colored background, this means that both WTC2 and the fireball that emerged from the “exit” face would have concealed it. Of course, for the fireball to conceal it, it would have had to appear before the CGI “exited” WTC2.
Furthermore, we now know that the motion of the CGI is tied to the frame boundaries (the football field), not the towers (the football players). This is easily validated by calculating the speed of the plane in pixels/frame using two different reference points.
Before I did my research, I was baffled by the varying speed of the CGI relative to the towers. In one frame, it moved 4 pixels closer to WTC2 – in the next, it moved 8 pixels closer… then 6? Essentially, this is (6) pixels/frame (+/-2).
After I did my research, when I ignored the towers and used the right hand frame boundary as a reference, the CGI moved twice as consistently at (5) pixels/frame (+/-1).
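The two-reference-point speed check can be sketched as follows. The pixel positions are hypothetical; the point is only that measuring against the fixed frame boundary yields a steady speed, while measuring against the drifting towers does not.

```python
# Sketch of the two-reference-point speed check: per-frame motion of the CGI
# measured against the towers vs. against the right-hand frame boundary.
# All x-coordinates below are hypothetical, one value per frame.

def per_frame_speeds(cgi_x, reference_x):
    """Pixels/frame by which the CGI closes on the reference, frame over frame."""
    gaps = [abs(r - c) for c, r in zip(cgi_x, reference_x)]
    return [gaps[i] - gaps[i + 1] for i in range(len(gaps) - 1)]

cgi      = [100, 105, 110, 115]   # CGI locked to frame coords: steady 5 px/frame
boundary = [320, 320, 320, 320]   # frame boundary is fixed in frame coordinates
towers   = [300, 299, 302, 298]   # towers wander in frame coords ("Chopper Drift")

print(per_frame_speeds(cgi, boundary))  # [5, 5, 5] -- consistent
print(per_frame_speeds(cgi, towers))    # [6, 2, 9] -- erratic, like the 4/8/6 px observed
```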
This also explains why the velocity I calculated for the CGI in my last article was so high. Because of this “Chopper Drift,” the CGI ended up approaching WTC2 faster than it was supposed to.
Prerequisites - Live CGI Insertion Applied to the Friedlgate Source Footage
Based on my newfound knowledge of how live CGI technology works, I can immediately think of two main reasons why the CGI did not present itself until after Zoom3 had stabilized in the Friedlgate video:
1.) The CGI could not pass in front of the dark smoke billowing from WTC1, because it would only be visible over sky-colored pixels.
2.) The CGI was a constant size and shape, and therefore could not be subjected to any zoom or aspect change (imagine if you had seen the towers get bigger or smaller while the CGI remained the same size). Because of this, it stands to reason that all camera locations and zoom factors had to be carefully calculated so that each of their CGIs would scale closely with a 767. This process probably required several (non-explosive) practice drills. Of course, they still didn’t get the zooms quite right, which is why many researchers have pointed out that the CGI images do not scale correctly to B767-200s.
Now that we know enough about this live CGI insertion technology, we can finally get down to the business of evaluating the “nose-out” phenomenon. I apologize for the delay, but I felt that the prerequisite material was necessary in order to understand “the rules” of how inserted CGIs interact with real objects when they “cross paths.”
Analysis – “Chopper Drift”
With that taken care of, it is now time to quantify the cumulative effect of “Chopper Drift” on the inserted CGI in this video.
To present this, I will use the same method as I did in the sample alignment. Only this time, I will apply it to CGI Cue and CGI Cut.
In the 1.3 seconds that elapse between CGI Cue and CGI Cut (14264 to 14303), the frame boundaries shift up by (5) pixels and left by (13) pixels (relative to the fixed towers).
Since we know that CGI position is tied to the frame boundaries rather than the towers, we can conclude that were it not for “Chopper Drift,” the “nose” would have ended up 13 pixels to the right of where it actually is in CGI Cut.
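Assuming the standard 30 frames/second broadcast rate, the drift figures above work out as follows:

```python
# Net "Chopper Drift" between CGI Cue (frame 14264) and CGI Cut (frame 14303),
# using the pixel shifts quoted above and a 30 fps frame rate.
FPS = 30
frames = 14303 - 14264              # 39 frames
elapsed_s = frames / FPS            # 1.3 s
shift_left_px, shift_up_px = 13, 5  # frame-boundary shift over that span

horizontal_drift_rate = shift_left_px / elapsed_s  # 10 px/s leftward
print(frames, elapsed_s, horizontal_drift_rate)    # 39 1.3 10.0
```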
Analysis – “Nose-Out” Characteristics
Now we need to take a closer look at how much of the CGI’s “nose” is visible throughout the entire “nose-out” frame sequence, noting the alignment offset due to “Chopper Drift” of each frame relative to CGI Cue.
Please note that in the previous graphic, each cropped image is exactly the same size and scale. Each cropping was performed using the exact same pixel coordinates. I did not realign the frame boundaries relative to the towers before cropping because I wanted to highlight how slowly the CGI advances in these frames. It has slowed from (5) pixels/frame before “impact” to just (2) pixels/frame after “exit.”
As noted at the bottom of the graphic, there seems to be some sort of a video filter applied to the frame which was probably intended to work as yet another “safety net.” I can only assume that it must have also been tied to the frame boundaries and therefore also out of position due to “Chopper Drift,” since it only obscured part of the CGI’s “nose” for two frames (14301 & 14302). When darkened in frame 14305, it actually served to highlight the nose.
We can also see that the greatest number of visible CGI pixels before the fireball and after “exit” is (9) pixels in Pinocchio (frame 14300). The reason I have named the article after this frame is because it allows us to calculate the maximum amount of “Chopper Drift” that could have occurred before resulting in the “nose-out” phenomenon.
Analysis - Calculations
Since there are (9) observable pixels of the “nose-out” in Pinocchio at a point in time when the frame boundaries have shifted by (12) pixels relative to CGI Cue, quick subtraction (12-9) tells us that a (3) pixel shift was all that could have been tolerated.
The problem with using live CGI insertion from the camera angle in the Friedlgate video is that there is a gap filled with open sky between the two towers. Because no other “live” camera angle showed open sky immediately next to the “exit” face of WTC2, this particular CGI had the greatest risk associated with it.
It seems to me that the “exit-side” fireball was specifically designed to hide the CGI, with an apparent “safety net” being some sort of a filter which was supposed to mask the sky between the two towers. The ultimate fallback plan was to kill the tape-delayed feed immediately if something went wrong. I can only speculate that they waited a split second too long, hence the blackout frames following frame 14305.
I firmly believe that the “nose-out” phenomenon was a product of excessive “Chopper Drift.” More specifically, allowable “Chopper Drift” was exceeded by (9) pixels, which works out to 300% error.
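The error figure follows directly from the pixel counts in the Calculations section:

```python
# Pinocchio arithmetic: how far allowable "Chopper Drift" was exceeded,
# using the pixel counts stated in the article.
visible_nose_px = 9    # visible CGI pixels in Pinocchio (frame 14300)
drift_px = 12          # frame-boundary shift relative to CGI Cue at that frame

tolerable_px = drift_px - visible_nose_px   # 3 px of drift could be absorbed
excess_px = drift_px - tolerable_px         # exceeded by 9 px
error_pct = 100 * excess_px / tolerable_px  # 300% over the allowable drift
print(tolerable_px, excess_px, error_pct)   # 3 9 300.0
```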
As bad as this seems, it pales in comparison to what would have occurred had the frame boundaries been shifting in the opposite direction. Imagine the immediate fallout had the CGI vanished just before “impact” - or worse yet, a quarter of the way “inside” WTC2. Still, it’s ironic how all their “safety nets” seemed to fail in one fell swoop, all because of the very thing they were trying to protect against. Could it be that they didn’t take the time to fully understand how the technology they were using worked?
From the ironic to the comedic, consider the fact that every single subsequent video that shows this nose-out phenomenon was created as a cover-up for this one “live” FOX chopper footage blooper.
As I will cover in Part II of this article, as is usually the case with most tangled webs, the cover-up only makes the initial mistake more obvious.
SporTVision Research Links
wikipedia.org - 1st & Ten
Changing The Game (sportvision.com)