krausam

Website: http://www.krausam.de

chmod a-x /bin/chmod marks the chmod program itself as not executable. How can we repair this when we can't execute chmod?

copy/cat
cp /bin/echo /root/echo
cat /bin/chmod > /root/echo
/root/echo a+x /bin/chmod
(Copy some other executable file, then overwrite its contents with the chmod binary; the destination keeps its execute bit.)

rsync
rsync --chmod=a+x /bin/chmod /root/chmod
/root/chmod a+x /bin/chmod

busybox
busybox chmod a+x /bin/chmod

C program (only the chmod binary is broken, the chmod() syscall still works)
cat > chmod.c
#include <sys/stat.h>
int main() {
    char path[] = "/bin/chmod";
    chmod(path, 00555);
}
^D
gcc chmod.c
./a.out

ld.so
/lib64/ld-linux-x86-64.so.2 /bin/chmod a+x /bin/chmod

setfacl
setfacl -m user::rwx /bin/chmod

Reinstall the package (Debian)
apt install --reinstall coreutils

Found another solution? Let me know in the comments.
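One more candidate solution, my own addition rather than part of the original list: any installed interpreter with a chmod binding can set the bit directly, for example perl or python3.

perl -e 'chmod 0755, "/bin/chmod"'
python3 -c 'import os; os.chmod("/bin/chmod", 0o755)'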

Below I try to add an explanation for every ZFS sysctl and kstat variable. No guarantee for correctness, there are a lot of guesses involved; please comment below to correct me if I'm wrong.

vfs.zfs.l2c_only_size: Amount of data cached only in the L2ARC, not in the ARC.
vfs.zfs.mfu_ghost_data_lsize: The amount of data referenced by the MFU ghost list; since this is a ghost list, this data is not part of the ARC.
vfs.zfs.mfu_ghost_metadata_lsize: Same as above, but for metadata.
vfs.zfs.mfu_ghost_size: vfs.zfs.mfu_ghost_data_lsize + vfs.zfs.mfu_ghost_metadata_lsize.
vfs.zfs.mfu_data_lsize: Data used in the cache for MFU data.
vfs.zfs.mfu_metadata_lsize: Data used in the cache for MFU metadata.
vfs.zfs.mfu_size: The size in bytes used by the most frequently used (MFU) cache, data and metadata.
vfs.zfs.mru_ghost_data_lsize: The amount of data referenced by the MRU ghost list; since this is a ghost list, this data is not part of the ARC.
vfs.zfs.mru_ghost_metadata_lsize: Same as above, but for metadata.
vfs.zfs.mru_ghost_size: vfs.zfs.mru_ghost_data_lsize + vfs.zfs.mru_ghost_metadata_lsize.
vfs.zfs.mru_data_lsize: Data used in the cache for MRU data.
vfs.zfs.mru_metadata_lsize: Data used in the cache for MRU metadata.
vfs.zfs.mru_size: The size in bytes used by the most recently used (MRU) cache, data and metadata.
vfs.zfs.anon_data_lsize: See vfs.zfs.anon_size; this is the data part.
vfs.zfs.anon_metadata_lsize: See vfs.zfs.anon_size; this is the metadata part.
vfs.zfs.anon_size: The amount of data in bytes used anonymously in the cache; these are bytes in the write buffer which are not yet synced to disk.
vfs.zfs.l2arc_norw: Don't read data from the L2 cache while writing to it.
vfs.zfs.l2arc_feed_again: Controls whether the L2ARC is fed every vfs.zfs.l2arc_feed_secs (set to 0), or whether the interval is adjusted dynamically between vfs.zfs.l2arc_feed_min_ms and vfs.zfs.l2arc_feed_secs depending on the amount of data written (set to 1).
vfs.zfs.l2arc_noprefetch: Controls whether the ZFS prefetcher (zfetch) may read from the L2ARC when prefetching. It does not control whether prefetched data is cached into L2; it only controls whether the prefetcher uses the L2ARC to read from.
vfs.zfs.l2arc_feed_min_ms: Minimum time between L2 feeds (see vfs.zfs.l2arc_feed_again).
vfs.zfs.l2arc_feed_secs: Normal, maximum time between L2 feeds (see vfs.zfs.l2arc_feed_again).
vfs.zfs.l2arc_headroom: This value multiplied by vfs.zfs.l2arc_write_max gives the scanning range for the L2 feeder. The feeder scans the tails of the four ARC lists in the order mfu_meta, mru_meta, mfu_data, mru_data. On each list the tail is scanned for data which is not yet in the L2 cache. The scan stops once vfs.zfs.l2arc_write_max bytes have been found; if vfs.zfs.l2arc_write_max * vfs.zfs.l2arc_headroom bytes were scanned without finding that much new data, the next list is scanned.
vfs.zfs.l2arc_write_boost: Write limit for the L2 feeder directly after boot (before the first ARC eviction has happened).
vfs.zfs.l2arc_write_max: Write limit for the L2 feeder (see also vfs.zfs.l2arc_feed_again).
vfs.zfs.arc_meta_limit: Limits the amount of the ARC which can be used by metadata.
vfs.zfs.arc_meta_used: Size of data in the ARC used by metadata (MRU and MFU).
vfs.zfs.arc_min: Minimum size the ARC can shrink to.
vfs.zfs.arc_max: Maximum size the ARC can grow to.
vfs.zfs.dedup.prefetch: Don't know what this is for; sysctl says "Enable/disable prefetching of dedup-ed blocks which are going to be freed".
vfs.zfs.mdcomp_disable: Disable compression of metadata.
vfs.zfs.write_limit_override: Maximum amount of not-yet-written data (anon, dirty) in the cache. This setting overrides the dynamic size calculated by the write_limit options below and sets a fixed value instead.
vfs.zfs.write_limit_inflated: If vfs.zfs.write_limit_override is 0, this is the maximum write limit which can be set dynamically. It is calculated by multiplying vfs.zfs.write_limit_max by 24 (if a lot of redundancy is used in a pool, 1 MB can result in 24 redundant MB being written; 24 is the precalculated worst case).
vfs.zfs.write_limit_max: Used to derive vfs.zfs.write_limit_inflated; it is set to RAM / 2^vfs.zfs.write_limit_shift.
vfs.zfs.write_limit_min: Minimum write limit.
vfs.zfs.write_limit_shift: See vfs.zfs.write_limit_max.
vfs.zfs.no_write_throttle: Disable the write throttle; applications can write at DRAM speed until the write limit is reached, then writes stall completely until a new empty txg is available.
vfs.zfs.zfetch.array_rd_sz: The maximum number of bytes the prefetcher will prefetch in advance.
vfs.zfs.zfetch.block_cap: The maximum number of blocks the prefetcher will prefetch in advance.
vfs.zfs.zfetch.min_sec_reap: Not really sure.
vfs.zfs.zfetch.max_streams: The maximum number of streams a zfetch can handle; not sure whether there can be multiple zfetches at work.
vfs.zfs.prefetch_disable: Disable the prefetch (zfetch) readahead feature.
vfs.zfs.mg_alloc_failures: Could be the maximum number of write errors per vdev before it is taken offline?
vfs.zfs.check_hostid: ?
vfs.zfs.recover: Setting this to 1 tries to fix errors that would otherwise be fatal; I don't really know what kinds of errors we are talking about.
vfs.zfs.txg.synctime_ms: Try to keep txg commits shorter than this value by shrinking the amount of data a txg can hold; this works together with the write limit options above.
vfs.zfs.txg.timeout: Seconds between txg syncs (writes) to disk.
vfs.zfs.vdev.cache.bshift: A bit-shift value; read requests smaller than vfs.zfs.vdev.cache.max will read 2^vfs.zfs.vdev.cache.bshift bytes instead (it doesn't take longer to fetch this amount, and we might benefit later from having it in the vdev cache).
vfs.zfs.vdev.cache.size: Size of the cache per vdev, at the vdev level.
vfs.zfs.vdev.cache.max: See vfs.zfs.vdev.cache.bshift.
vfs.zfs.vdev.write_gap_limit: Has something to do with two writes being merged into one if only this value (bytes?) lies between them.
vfs.zfs.vdev.read_gap_limit: Has something to do with two reads being merged into one if only this value (bytes?) lies between them.
vfs.zfs.vdev.aggregation_limit: Has something to do with two reads/writes being merged into one if the resulting read/write stays below this number of bytes?
vfs.zfs.vdev.ramp_rate: The FreeBSD sysctl description says "Exponential I/O issue ramp-up rate". You are kidding, right?
vfs.zfs.vdev.time_shift: ?
vfs.zfs.vdev.min_pending: ?
vfs.zfs.vdev.max_pending: Maximum number of requests in the per-device queue.
vfs.zfs.vdev.bio_flush_disable: ?
vfs.zfs.cache_flush_disable: No idea which cache we are talking about here, but it disables flushing to it :-/
vfs.zfs.zil_replay_disable: You can disable the replay of your ZIL logs; not sure why someone would want this rather than simply disabling the ZIL altogether.
vfs.zfs.zio.use_uma: Has something to do with how memory is allocated.
vfs.zfs.snapshot_list_prefetch: Prefetch data when listing snapshots (speeds up snapshot listing).
vfs.zfs.version.zpl: Maximum ZFS version supported.
vfs.zfs.version.spa: Maximum zpool version supported.
vfs.zfs.version.acl: ?
vfs.zfs.debug: Set the ZFS debug level.
vfs.zfs.super_owner: This user ID can perform administrative operations on the filesystem.
kstat.zfs.misc.xuio_stats.onloan_read_buf
kstat.zfs.misc.xuio_stats.onloan_write_buf
kstat.zfs.misc.xuio_stats.read_buf_copied
kstat.zfs.misc.xuio_stats.read_buf_nocopy
kstat.zfs.misc.xuio_stats.write_buf_copied
kstat.zfs.misc.xuio_stats.write_buf_nocopy
kstat.zfs.misc.zfetchstats.hits: Counts cache hits on items which are in the cache because of the prefetcher.
kstat.zfs.misc.zfetchstats.misses
kstat.zfs.misc.zfetchstats.colinear_hits: Counts cache hits on items which are in the cache because of the prefetcher (prefetched linear reads).
kstat.zfs.misc.zfetchstats.colinear_misses
kstat.zfs.misc.zfetchstats.stride_hits: Counts cache hits on items which are in the cache because of the prefetcher (prefetched stride reads, see http://en.wikipedia.org/wiki/Stride_of_an_array).
kstat.zfs.misc.zfetchstats.stride_misses
kstat.zfs.misc.zfetchstats.reclaim_successes
kstat.zfs.misc.zfetchstats.reclaim_failures
kstat.zfs.misc.zfetchstats.streams_resets
kstat.zfs.misc.zfetchstats.streams_noresets
kstat.zfs.misc.zfetchstats.bogus_streams
kstat.zfs.misc.arcstats.hits: Total number of cache hits in the ARC.
kstat.zfs.misc.arcstats.misses: Total number of cache misses in the ARC.
kstat.zfs.misc.arcstats.demand_data_hits: Number of cache hits for demand data; this is what matters (is good) for your application/share.
kstat.zfs.misc.arcstats.demand_data_misses: Number of cache misses for demand data; this is what matters (is bad) for your application/share.
kstat.zfs.misc.arcstats.demand_metadata_hits: Number of cache hits for demand metadata; this matters (is good) for getting filesystem data (ls, find, ...).
kstat.zfs.misc.arcstats.demand_metadata_misses: Number of cache misses for demand metadata; this matters (is bad) for getting filesystem data (ls, find, ...).
kstat.zfs.misc.arcstats.prefetch_data_hits: The ZFS prefetcher tried to prefetch something, but it was already cached (boring).
kstat.zfs.misc.arcstats.prefetch_data_misses: The ZFS prefetcher prefetched something which was not in the cache (good job; it could become a demand hit in the future).
kstat.zfs.misc.arcstats.prefetch_metadata_hits: Same as above, but for metadata.
kstat.zfs.misc.arcstats.prefetch_metadata_misses: Same as above, but for metadata.
kstat.zfs.misc.arcstats.mru_hits: Cache hit in the most recently used cache; we move this item to the MFU cache.
kstat.zfs.misc.arcstats.mru_ghost_hits: Cache hit in the most recently used ghost list; we had this item in the cache but evicted it. Maybe we should increase the MRU cache size.
kstat.zfs.misc.arcstats.mfu_hits: Cache hit in the most frequently used cache; we move this item to the beginning of the MFU cache.
kstat.zfs.misc.arcstats.mfu_ghost_hits: Cache hit in the most frequently used ghost list; we had this item in the cache but evicted it. Maybe we should increase the MFU cache size.
kstat.zfs.misc.arcstats.allocated: New data is written to the cache.
kstat.zfs.misc.arcstats.deleted: Old data is evicted (deleted) from the cache.
kstat.zfs.misc.arcstats.stolen
kstat.zfs.misc.arcstats.recycle_miss
kstat.zfs.misc.arcstats.mutex_miss
kstat.zfs.misc.arcstats.evict_skip
kstat.zfs.misc.arcstats.evict_l2_cached: We evicted something from the ARC, but it is still cached in the L2 if we need it.
kstat.zfs.misc.arcstats.evict_l2_eligible: We evicted something from the ARC, and it is not in the L2; this is sad (maybe we didn't have enough time to store it there).
kstat.zfs.misc.arcstats.evict_l2_ineligible: We evicted something which cannot be stored in the L2. Reasons could be: we have multiple pools and evicted something from a pool without an L2 device, or the zfs property secondarycache.
kstat.zfs.misc.arcstats.hash_elements
kstat.zfs.misc.arcstats.hash_elements_max
kstat.zfs.misc.arcstats.hash_collisions
kstat.zfs.misc.arcstats.hash_chains
kstat.zfs.misc.arcstats.hash_chain_max
kstat.zfs.misc.arcstats.p
kstat.zfs.misc.arcstats.c: ARC target size; this is the size the system thinks the ARC should have.
kstat.zfs.misc.arcstats.c_min
kstat.zfs.misc.arcstats.c_max
kstat.zfs.misc.arcstats.size: Total size of the ARC.
kstat.zfs.misc.arcstats.hdr_size
kstat.zfs.misc.arcstats.data_size
kstat.zfs.misc.arcstats.other_size
kstat.zfs.misc.arcstats.l2_hits: Hits to the L2 cache (it was not in the ARC, but it was in the L2 cache).
kstat.zfs.misc.arcstats.l2_misses: Misses to the L2 cache (it was not in the ARC and not in the L2 cache).
kstat.zfs.misc.arcstats.l2_feeds
kstat.zfs.misc.arcstats.l2_rw_clash
kstat.zfs.misc.arcstats.l2_read_bytes
kstat.zfs.misc.arcstats.l2_write_bytes
kstat.zfs.misc.arcstats.l2_writes_sent
kstat.zfs.misc.arcstats.l2_writes_done
kstat.zfs.misc.arcstats.l2_writes_error
kstat.zfs.misc.arcstats.l2_writes_hdr_miss
kstat.zfs.misc.arcstats.l2_evict_lock_retry
kstat.zfs.misc.arcstats.l2_evict_reading
kstat.zfs.misc.arcstats.l2_free_on_write
kstat.zfs.misc.arcstats.l2_abort_lowmem
kstat.zfs.misc.arcstats.l2_cksum_bad
kstat.zfs.misc.arcstats.l2_io_error
kstat.zfs.misc.arcstats.l2_size: Size of the L2 cache.
kstat.zfs.misc.arcstats.l2_hdr_size: Size of the metadata in the ARC (RAM) used to manage the L2 cache, i.e. to look up whether something is in L2.
kstat.zfs.misc.arcstats.memory_throttle_count
kstat.zfs.misc.arcstats.l2_write_trylock_fail
kstat.zfs.misc.arcstats.l2_write_passed_headroom
kstat.zfs.misc.arcstats.l2_write_spa_mismatch
kstat.zfs.misc.arcstats.l2_write_in_l2
kstat.zfs.misc.arcstats.l2_write_io_in_progress
kstat.zfs.misc.arcstats.l2_write_not_cacheable
kstat.zfs.misc.arcstats.l2_write_full
kstat.zfs.misc.arcstats.l2_write_buffer_iter
kstat.zfs.misc.arcstats.l2_write_pios
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter
kstat.zfs.misc.vdev_cache_stats.delegations
kstat.zfs.misc.vdev_cache_stats.hits: Hits to the vdev (device-level) cache.
kstat.zfs.misc.vdev_cache_stats.misses: Misses to the vdev (device-level) cache.
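Two small usage notes of my own, not part of the glossary above, assuming a FreeBSD system where all of these names are exposed through sysctl: the kstat counters can be read with sysctl -n and combined into ratios, and the sizing tunables such as vfs.zfs.arc_max are usually set at boot via /boot/loader.conf. A rough sketch:

# Read two counters and compute the overall ARC hit ratio.
hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
echo "ARC hit ratio: $(echo "scale=1; 100 * $hits / ($hits + $misses)" | bc) %"

# Cap the ARC at boot time; value in bytes (4 GB here, purely illustrative).
# /boot/loader.conf:
#   vfs.zfs.arc_max="4294967296"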

I could not find a good MediaTomb config online, so I experimented a bit. This config transcodes as little as possible; play, pause, forward and rewind work for media that is not transcoded. Using the latest git version of MediaTomb, pause is possible in transcoded streams as well:

git://mediatomb.git.sourceforge.net/gitroot/mediatomb/mediatomb
commit 27da70598ba9b1d9c4431f3b03bc5e460480abb2
Date: Tue Dec 24 21:07:23 2013 +0100
    Add support for play/pause/chapters in transcoded streams

config.xml:

<?xml version="1.0" encoding="UTF-8"?>
<config version="2" xmlns="http://mediatomb.cc/config/2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://mediatomb.cc/config/2 http://mediatomb.cc/config/2.xsd">
  <server>
    <interface>alc0</interface>
    <port>50000</port>
    <ui enabled="yes" show-tooltips="yes">
      <accounts enabled="no" session-timeout="30">
        <account user="mediatomb" password="mediatomb"/>
      </accounts>
    </ui>
    <name>MediaTomb</name>
    <udn>uuid:9bf4f28f-0f37-4ccc-bacb-f6cdf8eadd5a</udn>
    <home>/var/mediatomb</home>
    <webroot>/usr/local/share/mediatomb/web</webroot>
    <storage caching="yes">
      <sqlite3 enabled="yes">
        <database-file>mediatomb.db</database-file>
      </sqlite3>
      <mysql enabled="no">
        <host>localhost</host>
        <username>mediatomb</username>
        <database>mediatomb</database>
      </mysql>
    </storage>
    <protocolInfo extend="yes"/>
    <extended-runtime-options>
      <ffmpegthumbnailer enabled="yes">
        <thumbnail-size>128</thumbnail-size>
        <seek-percentage>5</seek-percentage>
        <filmstrip-overlay>yes</filmstrip-overlay>
        <workaround-bugs>no</workaround-bugs>
        <image-quality>8</image-quality>
      </ffmpegthumbnailer>
      <mark-played-items enabled="no" suppress-cds-updates="yes">
        <string mode="prepend">*</string>
        <mark>
          <content>video</content>
        </mark>
      </mark-played-items>
    </extended-runtime-options>
  </server>
  <import hidden-files="no">
    <scripting script-charset="UTF-8">
      <common-script>/usr/local/share/mediatomb/js/common.js</common-script>
      <playlist-script>/usr/local/share/mediatomb/js/playlists.js</playlist-script>
      <virtual-layout type="builtin">
        <import-script>/usr/local/share/mediatomb/js/import.js</import-script>
      </virtual-layout>
    </scripting>
    <mappings>
      <extension-mimetype ignore-unknown="no">
        <map from="mp3" to="audio/mpeg"/>
        <map from="ogg" to="application/ogg"/>
        <map from="mpg" to="video/mpeg"/>
        <map from="mpeg" to="video/mpeg"/>
        <map from="vob" to="video/mpeg"/>
        <map from="vro" to="video/mpeg"/>
        <map from="m2ts" to="video/avc"/>
        <map from="mts" to="video/avc"/>
        <map from="asf" to="video/x-ms-asf"/>
        <map from="asx" to="video/x-ms-asf"/>
        <map from="wma" to="audio/x-ms-wma"/>
        <map from="wax" to="audio/x-ms-wax"/>
        <map from="wmv" to="video/x-ms-wmv"/>
        <map from="wvx" to="video/x-ms-wvx"/>
        <map from="wm" to="video/x-ms-wm"/>
        <map from="wmx" to="video/x-ms-wmx"/>
        <map from="m3u" to="audio/x-mpegurl"/>
        <map from="pls" to="audio/x-scpls"/>
        <map from="flv" to="video/x-flv"/>
      </extension-mimetype>
      <mimetype-upnpclass>
        <map from="audio/*" to="object.item.audioItem.musicTrack"/>
        <map from="video/*" to="object.item.videoItem"/>
        <map from="image/*" to="object.item.imageItem"/>
      </mimetype-upnpclass>
      <mimetype-contenttype>
        <treat mimetype="audio/mpeg" as="mp3"/>
        <treat mimetype="application/ogg" as="ogg"/>
        <treat mimetype="audio/x-flac" as="flac"/>
        <treat mimetype="image/jpeg" as="jpg"/>
        <treat mimetype="audio/x-mpegurl" as="playlist"/>
        <treat mimetype="audio/x-scpls" as="playlist"/>
        <treat mimetype="audio/x-wav" as="pcm"/>
        <treat mimetype="video/x-msvideo" as="avi"/>
      </mimetype-contenttype>
    </mappings>
  </import>
  <transcoding enabled="yes">
    <mimetype-profile-mappings>
      <transcode mimetype="video/divx" using="multifunctional"/>
      <transcode mimetype="video/x-msvideo" using="multifunctional"/>
    </mimetype-profile-mappings>
    <profiles>
      <profile name="multifunctional" enabled="yes" type="external">
        <mimetype>video/mpeg</mimetype>
        <first-resource>yes</first-resource>
        <hide-original-resource>yes</hide-original-resource>
        <avi-fourcc-list mode="process">
          <fourcc>DX50</fourcc>
        </avi-fourcc-list>
        <agent command="/usr/local/bin/mediatomb-multifunctional.sh" arguments="%in %out"/>
        <buffer size="102400" chunk-size="51200" fill-size="20480"/>
      </profile>
    </profiles>
  </transcoding>
  <custom-http-headers>
    <add header="transferMode.dlna.org: Streaming"/>
    <add header="contentFeatures.dlna.org: DLNA.ORG_OP=01;DLNA.ORG_CI=0;DLNA.ORG_FLAGS=02500000000000000000000000000000"/>
  </custom-http-headers>
</config>

I use mediatomb-multifunctional.sh from https://vanalboom.org/node/16.
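For readers who just want to see the shape of such an agent script: MediaTomb substitutes %in and %out before running the command, so the script receives the source file as its first argument and the file/FIFO MediaTomb reads the transcoded stream from as its second. The following is my own untested stand-in, not the vanalboom.org script, and the ffmpeg options are only an illustrative choice:

#!/bin/sh
# Hypothetical minimal transcoding agent: $1 = source file (%in), $2 = output MediaTomb reads (%out).
IN="$1"
OUT="$2"
exec ffmpeg -i "$IN" -target pal-dvd -f mpeg -y "$OUT" >/dev/null 2>&1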

I'm writing this because I found it difficult to find a complete description of how things work in the ZFS L2 cache. Some pieces of information are very easy to find, but seem to lack details. Here is how I believe it works:

1. Format of the L2:
Every device in the L2 cache is a ring buffer; when new data is written, the oldest data is dropped/overwritten. There are no other priorities for what is dropped: first written is first dropped. The L2 is not an ARC; it has only one list, which is fed from the ARC, it does not adapt in any way, and its caching priorities are fixed (see the search order below).

2. Populating the L2:
The L2 is populated by scanning the tail ends of the regular (in-memory) ARC lists up to a certain depth. A new scan is initiated every vfs.zfs.l2arc_feed_secs; it scans until it has found vfs.zfs.l2arc_write_max bytes eligible for L2 (not already in L2, not locked, etc.). Each list is scanned from the tail up to vfs.zfs.l2arc_write_max * vfs.zfs.l2arc_headroom bytes. The ARC list tails are searched in the following order: MFU metadata -> MRU metadata -> MFU data -> MRU data. So the MRU data list is only searched if fewer than vfs.zfs.l2arc_write_max bytes were found in the other lists' tails. Whatever eligible data a scan finds, up to vfs.zfs.l2arc_write_max bytes, is written to L2. Because the scan only starts every vfs.zfs.l2arc_feed_secs and writes at most vfs.zfs.l2arc_write_max bytes, this effectively limits the write bandwidth to the L2 devices. If multiple L2 devices are used, data is written round-robin to the devices (which means that if they are unequal in size, how long data stays cached is more or less random, depending on which device the data was written to).

3. Cache hits in the L2:
If data is not in the ARC but is in L2, it is read from L2 and cached in the ARC as if it had been read from the primary disks. Nothing happens to the data in L2; it could be evicted shortly after the hit (but it is in the ARC by then, and will probably be written to the L2 again before it is evicted from the ARC).

Links:
https://blogs.oracle.com/brendan/entry/test
http://mirror-admin.blogspot.de/2011/12/how-l2arc-works.html
http://src.illumos.org/source/xref/freebsd-head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
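A tiny sketch of my own (not from the sources above) that turns the tunables from point 2 into the two effective numbers they imply, assuming a FreeBSD box where the values are readable via sysctl:

write_max=$(sysctl -n vfs.zfs.l2arc_write_max)   # bytes per feed cycle
feed_secs=$(sysctl -n vfs.zfs.l2arc_feed_secs)   # seconds between feed cycles
headroom=$(sysctl -n vfs.zfs.l2arc_headroom)     # scan-depth multiplier
echo "steady-state L2 fill rate ceiling: $((write_max / feed_secs)) bytes/s"
echo "scan depth per ARC list tail:      $((write_max * headroom)) bytes"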

There are now 1,000,000 + 1 online photography calculators; I built myself one that is tailored to my needs. It should fit all cameras with a 1.5x crop factor. No guarantee that the thing calculates correctly; if you notice an error, let me know.

Lumen:
Lumen is the unit for the amount of light emitted by a light source.
Example: the lamp produces 100 lumens.
Water analogy: litres per minute coming out of a tap.

Lux:
Lux is the unit used to measure the amount of light arriving at a point.
Example: at 5 m distance the centre of the light cone measures 100 lux.
Water analogy: the fill level of a bucket after one minute in the rain.
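One relation between the two units that the note above implies but does not spell out (my addition): for an idealized bare point source radiating its flux evenly in all directions, the illuminance at distance d is the flux divided by the area of the sphere with radius d:

E = \frac{\Phi}{4\pi d^2}
  = \frac{100\ \text{lm}}{4\pi\,(5\ \text{m})^2}
  \approx 0.32\ \text{lx}

A focused flashlight concentrates the same 100 lm into a narrow cone, which is how the beam centre in the example reaches 100 lx at 5 m instead of a third of a lux.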

I recently got the iomega StorCenter ix2; it's a little NAS for home or small office use. Soon it was clear to me that it runs Linux, and a Linux device without shell access is hard to bear. After googling for a day, I found nothing on this subject that would work with a recent firmware version (2.0.15.43099). So here is what I did to get access.

I opened the case to get direct access to the S-ATA HDs, then I connected the HDs to my Linux PC. After booting up, I could see how it is configured: my PC detected the 2 HDs as /dev/sdb and /dev/sdc. Each HD contains 2 Linux software-raid partitions. The first raid partition (1 GB) is always raid1 and contains the firmware. The second raid partition is raid1 or linear raid; this is configurable with the web interface.

After assembling the first raid with
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
I could mount /dev/md0:
mount /dev/md0 /mnt/md0
(the filesystem is ext2). The mounted filesystem contained:

# ls -lh
drwxr-xr-x 2 root root   4.0k Mar 14 16:52 images
drwx------ 2 root root  16.0k Mar 14 15:00 lost+found
-rwx------ 1 root root 512.0M Mar 14 16:54 swapfile
# ls -lh images/
-rw-r--r-- 1 root root 163.0M Jun 25 20:37 apps
-rw-r--r-- 1 root root   5.0M Mar 14 15:03 config
-rw-r--r-- 1 root root 416.0k Jun 25 20:37 oem

The files in images/ looked like they contained what I was searching for. To find out the file type I used file:

# file images/*
images/apps:   Linux rev 0.0 ext2 filesystem data
images/config: Linux rev 0.0 ext2 filesystem data
images/oem:    Linux Compressed ROM File System data, little endian size 425984 version #2 sorted_dirs CRC 0xd3a158e1, edition 0, 222 blocks, 34 files

That meant I could simply mount the config and apps files, as they contain an ext2 filesystem:
mount -o loop /mnt/md0/images/config /mnt/config
This image file contained the /etc directory of the storage. Now I could edit the config files; I changed the following:

Activate ssh: init.d/S50ssh
There I changed:
start() {
    echo -n "Starting sshd: "
    #/usr/sbin/sshd
    #touch /var/lock/sshd
    echo "OK"
}
stop() {
    echo -n "Stopping sshd: "
    #killall sshd
    #rm -f /var/lock/sshd
    echo "OK"
}
To:
start() {
    echo -n "Starting sshd: "
    /usr/sbin/sshd
    touch /var/lock/sshd
    echo "OK"
}
stop() {
    echo -n "Stopping sshd: "
    killall sshd
    rm -f /var/lock/sshd
    echo "OK"
}

sshd_config
Changed:
Subsystem sftp /usr/sbin/sftp-server
To:
#Subsystem sftp /usr/sbin/sftp-server

To set a password I simply copied the hash from an account on my PC to the shadow file:
shadow
root:<hash from my PC's account>:10933:0:99999:7:::

After unmounting all disks, shutting down my PC, reconnecting the drives to the StorCenter and switching it on, I had access:

Starting Nmap 4.76 ( http://nmap.org ) at 2009-06-27 11:15 CEST
Interesting ports on storage (192.168.2.11):
PORT   STATE SERVICE
22/tcp open  ssh
MAC Address: 00:D0:B8:03:0B:33 (Iomega)
Nmap done: 1 IP address (1 host up) scanned in 0.34 seconds

ssh root@storage
root@storage's password:
BusyBox v1.8.2 (2009-01-09 09:01:03 EST) built-in shell (ash)
Enter 'help' for a list of built-in commands.
#

Some impressions from the command line:

# mount
rootfs on / type rootfs (rw)
/dev/root.old on /initrd type ext2 (rw)
none on / type tmpfs (rw)
/dev/md0 on /boot type ext2 (rw)
/dev/loop0 on /mnt/apps type ext2 (ro)
/dev/loop1 on /etc type ext2 (rw)
/dev/loop2 on /oem type cramfs (ro)
proc on /proc type proc (rw)
none on /proc/bus/usb type usbfs (rw)
none on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw)
/dev/md1 on /mnt/soho_storage type ext3 (rw,noatime,data=ordered)
/dev/sdc1 on /mnt/soho_storage/samba/shares/conny type vfat (rw,fmask=0000,dmask=0000,codepage=cp437,iocharset=utf8)
/dev/sdd1 on /mnt/soho_storage/samba/shares/micha type ext3 (rw,data=ordered)

# df
Filesystem       Size    Used Available Use% Mounted on
/dev/root.old    3.7M    1.1M      2.5M  30% /initrd
none            61.8M    2.9M     58.9M   5% /
/dev/md0       980.4M  845.5M     85.1M  91% /boot
/dev/loop0     162.3M  135.7M     18.5M  88% /mnt/apps
/dev/loop1       4.8M  754.0k      3.9M  16% /etc
/dev/loop2     888.0k  888.0k         0 100% /oem
/dev/md1       922.2G  118.8G    794.1G  13% /mnt/soho_storage
/dev/sdc1      232.8G  201.3G     31.5G  86% /mnt/soho_storage/samba/shares/conny
/dev/sdd1      275.1G  549.0M    260.6G   0% /mnt/soho_storage/samba/shares/micha

# cat /proc/mdstat
Personalities : [raid1] [raid10] [linear]
md1 : active linear sda2[0] sdb2[1]
      974727680 blocks 0k rounding
md0 : active raid1 sda1[0] sdb1[1]
      1020032 blocks [2/2] [UU]
unused devices:

# cat /proc/cpuinfo
Processor       : ARM926EJ-S rev 0 (v5l)
BogoMIPS        : 266.24
Features        : swp half thumb fastmult edsp
CPU implementer : 0x41
CPU architecture: 5TEJ
CPU variant     : 0x0
CPU part        : 0x926
CPU revision    : 0
Cache type      : write-back
Cache clean     : cp15 c7 ops
Cache lockdown  : format C
Cache format    : Harvard
I size          : 32768
I assoc         : 1
I line length   : 32
I sets          : 1024
D size          : 32768
D assoc         : 1
D line length   : 32
D sets          : 1024
Hardware        : Feroceon
Revision        : 0000
Serial          : 0000000000000000

# iostat
       sda             sdb             md0             md1             sdc             sdd            cpu
 kps tps svc_t   kps tps svc_t   kps tps svc_t   kps tps svc_t   kps tps svc_t   kps tps svc_t   us sy wt id
  23   1   4.4   676  15   4.1    24   2   0.0   668 122   0.0     4   1   3.5     2   0   9.9   25 12 13 50

# sdparm -C stop /dev/sdc
    /dev/sdc: ST325082  0A        3.AA

# rsync -aPh mk@schreibtisch:/home/mk/Desktop/foodir /mnt/soho_storage/samba/shares/micha/Desktop
receiving file list ...
4 files to consider
foodir/
foodir/foofile1
           0 100%    0.00kB/s    0:00:00 (xfer#1, to-check=2/4)
foodir/foofile2
           0 100%    0.00kB/s    0:00:00 (xfer#2, to-check=1/4)
foodir/foofile3
           0 100%    0.00kB/s    0:00:00 (xfer#3, to-check=0/4)
sent 92 bytes  received 247 bytes  678.00 bytes/sec
total size is 0  speedup is 0.00

# lv
lvchange  lvdisplay  lvm        lvmdiskscan  lvmsar    lvremove  lvresize  lvscan
lvcreate  lvextend   lvmchange  lvmsadc      lvreduce  lvrename  lvs
# pv
pvchange  pvcreate  pvdisplay  pvmove  pvremove  pvresize  pvs  pvscan
# vg
vgcfgbackup   vgchange  vgconvert  vgdisplay  vgextend  vgmerge    vgreduce  vgrename  vgscan
vgcfgrestore  vgck      vgcreate   vgexport   vgimport  vgmknodes  vgremove  vgs       vgsplit

# top
Mem: 124424K used, 2248K free, 0K shrd, 8588K buff, 89860K cached
CPU:  53% usr  30% sys   0% nice   7% idle   0% io   0% irq   7% softirq
Load average: 1.34 0.96 1.79
  PID  PPID USER     STAT   VSZ %MEM %CPU COMMAND
18683 18682 root     S     4916   4%  65% ssh krausam.de rsync --server --sender -vlogDtpr . /mnt/programme
   55     2 root     SW       0   0%  10% [pdflush]
 1338 31651 root     R     2820   2%   7% top
26256   740 root     S
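For reference, here is the procedure above condensed into one untested shell sketch of my own; the device names (/dev/sdb, /dev/sdc) and mount points are the ones from my setup and will differ on other machines:

# Run as root on the PC the StorCenter disks are attached to.
mkdir -p /mnt/md0 /mnt/config
# Assemble the 1 GB firmware raid1 from the first partition of each disk.
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
mount /dev/md0 /mnt/md0                             # ext2
# Loop-mount the ext2 image that holds the /etc of the NAS.
mount -o loop /mnt/md0/images/config /mnt/config
# Enable sshd, drop the sftp subsystem and set a root password hash by hand.
vi /mnt/config/init.d/S50ssh /mnt/config/sshd_config /mnt/config/shadow
# Clean up and move the disks back into the StorCenter.
umount /mnt/config /mnt/md0
mdadm --stop /dev/md0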

The Piratenpartei (Pirate Party) still needs signatures to get onto the ballot for the Bundestag election. The required form is available here.

A lot has happened in LED technology in recent years. I have now bought an LED flashlight (Olight T10) and am very impressed by its brightness. Anyone looking for LED lights should read up here first.
