Posts filed under #turkeyhome

This rain, this rain, this rain finer than a hair,
softer than breath, this falling rain.
This rain, this rain — when one day it stops,
the mirrors will no longer know my face.
This rain, a thread that strangles my blood,
a knife lying painless in my flesh.
This rain — stone on the ground, bone in me —
will keep drizzling down for as long as they hold.
This rain, beyond the delusion of madness,
beyond dark, unbanishable thoughts:
a wedding the djinn hold in my brain,
out of waters, sounds and nights...
N.F.K. (Necip Fazıl Kısakürek)

#oneistanbul #turkeyhome #turk_kadraj #turkeyday #turkisshot #istanbulcity #istanbullife #istanbullove #altinkare #ig_today #instagram #instagramturkey #gunbatm #sunset #follow #sunsetpics #azizistanbul #istanbulhdr #beatifuldestinations #earthfocus #bosphorus #ig_eurasia #ig_photooftheday #gununkaresi #turkiyedenkadrajlar #anadolugram #fotogulumse #ig_shotzbn #fotoderyasi #fineart_turkey

Kıyıkışlacık / Milas... An Aegean without bougainvillea — a seashore, a house with a garden, an arbor for drinking tea, the streets — would be unthinkable now... It adds such color, makes everything so beautiful, belongs here so completely, that from now on I don't think I could imagine "a day without bougainvillea"... So, do you know the bougainvillea's story? The bougainvillea's homeland is Brazil. In the 1700s it was brought to Europe by the French admiral Baron Louis de Bougainville, who had sailed to South America, and it took his name. From Europe it then spread to every country with a Mediterranean climate... #kyklack #milas #mula #yazkismugla #instamugla #turkeytrip #turkeyhome #hurriyetseyahat #homeof #gezlist #comeseeturkey

An extract on #turkeyhome

IEEE 754 specifies a special value called "Not a Number" (NaN) to be returned as the result of certain "invalid" operations, such as 0/0, ∞ × 0, or sqrt(−1). In general, NaNs will be propagated; i.e., most operations involving a NaN will result in a NaN, although functions that would give some defined result for any given floating-point value will do so for NaNs as well, e.g. NaN ^ 0 = 1. There are two kinds of NaNs: the default quiet NaNs and, optionally, signaling NaNs. A signaling NaN in any arithmetic operation (including numerical comparisons) will cause an "invalid" exception to be signaled. The representation of NaNs specified by the standard has some unspecified bits that could be used to encode the type or source of the error, but there is no standard for that encoding. In theory, signaling NaNs could be used by a runtime system to flag uninitialized variables, or to extend the floating-point numbers with other special values, without slowing down computations on ordinary values, although such extensions are not common.

The fact that floating-point numbers cannot precisely represent all real numbers, and that floating-point operations cannot precisely represent true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers.

For example, the non-representability of 0.1 and 0.01 (in binary) means that the result of attempting to square 0.1 is neither 0.01 nor the representable number closest to it. In the 24-bit (single-precision) representation, 0.1 (decimal) was given previously as e = −4; s = 110011001100110011001101, which is

  0.100000001490116119384765625 exactly.

Squaring this number gives

  0.010000000298023226097399174250313080847263336181640625 exactly.

Squaring it with single-precision floating-point hardware (with rounding) gives

  0.010000000707805156707763671875 exactly.

But the representable number closest to 0.01 is

  0.009999999776482582092285156250 exactly.

Also, the non-representability of π (and π/2) means that an attempted computation of tan(π/2) will not yield a result of infinity, nor will it even overflow. It is simply not possible for standard floating-point hardware to attempt to compute tan(π/2), because π/2 cannot be represented exactly. In double-precision C this computation gives a result of 16331239353195370.0. In single precision (using the tanf function), the result will be −22877332.0. By the same token, an attempted computation of sin(π) will not yield zero. The result will be approximately 0.1225 × 10⁻¹⁵ in double precision, or −0.8742 × 10⁻⁷ in single precision.

While floating-point addition and multiplication are both commutative (a + b = b + a and a × b = b × a), they are not necessarily associative. That is, (a + b) + c is not necessarily equal to a + (b + c).
Using 7-digit significand decimal arithmetic, with a = 1234.567, b = 45.67834, c = 0.0004:

(a + b) + c:

    1234.567     (a)
  +   45.67834   (b)
  ____________
    1280.24534   rounds to 1280.245

    1280.245     (a + b)
  +    0.0004    (c)
  ____________
    1280.2454    rounds to 1280.245  <--- (a + b) + c

a + (b + c):

      45.67834   (b)
  +    0.0004    (c)
  ____________
      45.67874

    1234.567     (a)
  +   45.67874   (b + c)
  ____________
    1280.24574   rounds to 1280.246  <--- a + (b + c)

They are also not necessarily distributive. That is, (a + b) × c may not be the same as a × c + b × c:

  1234.567 × 3.333333 = 4115.223
  1.234567 × 3.333333 = 4.115223
  4115.223 + 4.115223 = 4119.338

but

  1234.567 + 1.234567 = 1235.802
  1235.802 × 3.333333 = 4119.340

In addition to loss of significance, the inability to represent numbers such as π and 0.1 exactly, and other slight inaccuracies, the following phenomena may occur:

Cancellation: subtraction of nearly equal operands may cause extreme loss of accuracy. When we subtract two almost equal numbers we set the most significant digits to zero, leaving ourselves with just the insignificant, and most erroneous, digits. For example, when determining the derivative of a function the following formula is used:

  Q(h) = (f(a + h) − f(a)) / h.

Intuitively one would want an h very close to zero; however, when using floating-point operations, the smallest number will not give the best approximation of the derivative. As h grows smaller, the difference between f(a + h) and f(a) grows smaller, cancelling out the most significant and least erroneous digits and making the most erroneous digits more important. As a result, the smallest possible value of h will give a more erroneous approximation of the derivative than a somewhat larger value. This is perhaps the most common and serious accuracy problem.

Conversions to integer are not intuitive: converting (63.0/9.0) to integer yields 7, but converting (0.63/0.09) may yield 6. This is because conversions generally truncate rather than round.
Floor and ceiling functions may produce answers which are off by one from the intuitively expected value.

Limited exponent range: results might overflow, yielding infinity, or underflow, yielding a subnormal number or zero. In these cases precision will be lost.

Testing for safe division is problematic: checking that the divisor is not zero does not guarantee that a division will not overflow.

Testing for equality is problematic: two computational sequences that are mathematically equal may well produce different floating-point values.